The NY Times recently published an article covering the troubling working conditions at Amazon. It highlights former employees who describe how Amazon’s culture pushes people to the edge of their mental health and then pushes some more.
While that’s concerning, in a systemic tech industry cancer sort of way, what’s even more concerning is what happened after the article was published.
Sometimes you’ll hear people say that Search is a solved problem. It turns out that’s not really true. What we’ve solved is a specific subset of Search where the person doing the searching already knows what it is they’re looking for.
You can go to Google and search for “James Brown” or you can go to Amazon and search for “metal grabber utensil,” and the search results will be the very things you seek. It doesn’t matter how vague “metal grabber utensil” is: the second search result is cooking tongs. That search is solved.
There’s a whole other class of search problems, though, and computers are terrible at them.
Check out Part 1 of this piece, Shut Up Everybody.
I’d like to build a website where I can enter a prediction and who authored it, and then later come back and mark whether the prediction was good or bad. You get a point for a good prediction and lose a point for a bad one. With enough data you’d have a pretty accurate read on whether someone knows what they’re talking about. You could even track predictions on a site-by-site basis and get the same readouts for your favorite blogs.
Like Wikipedia, the whole thing runs on user-submitted, verifiable data. When you enter someone’s prediction, you link to the source. When you have proof that the prediction panned out (or didn’t), you link to that source as well.
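The data model here is simple enough to sketch. Below is a minimal, hypothetical Python version of the idea: each prediction carries its author, site, and source link; resolving it records the outcome plus a proof link; and scores are +1 per good call and −1 per bad one, aggregated by author or by site. All names (`Prediction`, `PredictionTracker`, etc.) are my own illustration, not anything that exists.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    author: str              # who made the prediction
    site: str                # where it was published
    text: str                # the prediction itself
    source_url: str          # link to the original claim
    outcome: Optional[bool] = None   # None = not yet resolved
    outcome_url: Optional[str] = None  # link proving the outcome

class PredictionTracker:
    def __init__(self):
        self.predictions = []

    def add(self, prediction):
        self.predictions.append(prediction)

    def resolve(self, prediction, good, outcome_url):
        """Mark a prediction good (True) or bad (False), with a proof link."""
        prediction.outcome = good
        prediction.outcome_url = outcome_url

    def score(self, key):
        """Tally +1 per good and -1 per bad prediction, grouped by
        'author' or 'site'. Unresolved predictions don't count."""
        totals = {}
        for p in self.predictions:
            if p.outcome is None:
                continue
            k = getattr(p, key)
            totals[k] = totals.get(k, 0) + (1 if p.outcome else -1)
        return totals
```

With that in place, `tracker.score("site")` would give you the per-blog credibility readout described above, and `tracker.score("author")` the per-person one.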
Here are a couple of examples. As I write this post, a lot of bloggers feel they need to weigh in on whether the Xbox One will be a success. I say let’s write it all down and see who’s wrong! Maybe we’re giving readership to blogs that have no actual credibility or expertise in the fields they report on. TechCrunch famously pooh-poohed Twitter when it launched in 2006, but I’m sure they’ve been right about plenty of things too. What’s the exact percentage? Who are the most accurate predictors on staff?
The data is already out there and aggregating it would be a great service to the internet.
Changing the Internet’s Motivations
One of the best ways to enact behavioral change is to start measuring the thing you want to change. If you already read Shut Up Everybody, then you know a little about the seedier side of journalism. It’s a business whose success hinges on pageviews and ad impressions, which incentivizes opinionated journalism and sensationalism. Many blogs want to bring out the worst in us, because it’s the easiest way to get our clicks.
What if the service I described above existed? We can already look up M. Night Shyamalan’s track record on Rotten Tomatoes and learn that his movies have received an aggregate rating of just 44/100. Metacritic provides a similar service for entertainment. Why aren’t we measuring more things? I want to measure integrity. I want to know when a website I’m visiting has notoriously low integrity. I want its readership to flounder as a result, and I want a competitor with more integrity to take that readership.
If you’re passionate about this subject and you have website-building skills, you should get in touch with me by sending an email to my Gmail address: yayitsandrew.