I mentioned in the last post that I thought there were problems with science. The major problem with science is that it is funded and directed in a way that echoes the broader inequities of society. A prime example is the current focus on producing a self-navigating, self-driving car. I think the case can be made that we need to get rid of the personal auto, to remove it from the way we organize society. Given that, it is a complete waste of time for us to spend money figuring out how to make a car run without a driver. Of course, there is also the fact that we already know how to have a car, or transport of some sort, run without being immediately directed by humans: put it on rails. We do this all the time, and mass transit works.
That won't help the military, which is the main organization behind automating cars. The idea, supposedly, is that it will save lives by not putting humans in harm's way. What they really mean is that it will save the lives of the military that has this technology and, probably, kill more people in countries whose militaries don't have it. Sure, it is likely that advances in this field will help other areas of AI research, and that isn't necessarily a bad thing. But I'm damned sure that I don't want the military to develop the first strong AI, or really even the government. I can't think of a single government I would trust with the thing.
It's the same thing with facial recognition. It could, in theory at least, help with the development of strong AI, but isn't the path we take to get to the post-human future as important as the place we end up? I don't want to get to a post-human future just to have everything and everyone monitored all the time, to have that be the expectation. That sounds fully craptastic.