Story of Wrangell Data
Wrangell Data started with a problem. I had taught some classes about data cleaning and preparation. We used Python and notebooks, because that’s what I knew, and was comfortable teaching.
When I talked to people after the class, they would tell me that it wasn’t particularly intuitive, and that it was hard to make the data cleaning repeatable.
I’d also come across people who were using spreadsheets to do data cleaning and merging. They were doing a lot of manual work. Even though there are better ways to automate things with spreadsheets, the features aren’t intuitive or obvious.
Things hadn’t really seemed to change much, even as the rest of technology marched on.
There are a lot of data tools and startups out there, but they all focus on the same things - massive amounts of data, complicated integrations, technology-first approaches.
That’s great if you really have a lot of data, and are planning to analyze it, but it leaves a lot of people out. It’s also really expensive to store that amount of data and process it.
The First Idea
My original idea in this space was to build a web-based spreadsheet that had data cleaning tools built in, along with some scripting and API integrations. I wanted to target developers who use spreadsheets, and give them a power tool.
The more I thought about it, the more I realized that I was focusing on the wrong people - developers are already comfortable writing scripts and putting together pipelines.
There are also a lot of creative tools out there focused on the developer market, especially in the spreadsheet space. After doing some research, I saw that I would have to build out quite a lot just to reach parity.
The Pivot
I still wanted to focus on data cleaning and preparation, but I wanted to focus on the people who were doing it manually.
That made me think about the type of data that these people were working with. It was human-scale data - data that a person could reasonably work with.
So many of the solutions out there cater to the enterprise, or to people who have a lot of data to store. I wanted to build a tool for everybody else. Something that was intuitive, easy to use, and didn’t require a lot of setup to get results.
Why the name Wrangell Data?
One of the hobbies I’m most passionate about is visiting United States national parks. There are currently 63 national parks in the United States, and I’ve visited 54 of them.
In the summer of 2024, I visited Wrangell-St. Elias National Park in Alaska. It is on the road system of Alaska, but the main visitor area is 60 miles off the paved roads, down the famous McCarthy Road.
The good news is that the McCarthy Road is much more tame than it used to be, and we had no trouble taking our rental car down the road.
While we were in the park, we explored the old copper mine buildings, took a hike on the Root Glacier, and got to see a jökulhlaup - a glacial outburst flood.
I can’t just completely turn off my problem-solving brain, so I’d think about some of the things I wanted to create when we got back to the Lower 48.
The name Wrangell Data is a play on words between the national park name and the idea of wrangling data. I hope you like it!