What happens when you train an artificial neural network to recognise images, then run the system in reverse: start from random noise and evolve an image of what it “sees” when asked about something that appears in pictures, anything from a banana to a landscape? Apparently, you discover that the software is tripping its nonexistent tits off and hallucinating like mad.
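(What the post is gesturing at is usually called activation maximisation: nudge the pixels of a noise image, by gradient ascent, toward whatever makes a trained classifier most confident about a chosen class. Here is a minimal, hypothetical sketch in PyTorch, assuming a torchvision GoogLeNet; the class index 954, ImageNet's "banana", and the hyperparameters are illustrative picks, and Google's actual DeepDream pipeline goes further, amplifying intermediate-layer activations across multiple scales rather than a single class logit.)

```python
import torch
import torchvision.models as models

# Load a pretrained classifier (GoogLeNet/Inception is the family DeepDream
# was built on). Eval mode so the forward pass returns plain logits.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.eval()

# Start from pure random noise; the image itself is the thing we optimise.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

target_class = 954  # assumption: ImageNet class 954 is "banana"

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Gradient *ascent* on the target logit = descent on its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
    # Keep pixel values in a sane range so the optimiser can't run away.
    with torch.no_grad():
        image.clamp_(-2.0, 2.0)
```

After a few hundred steps the noise drifts toward banana-ish textures and shapes: the network hallucinating its idea of the class back at you.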
LUCY IN THE SKY WITH DUMBBELLS
Google obviously have a lot of time and money invested in image search and classification technology. The digital learning systems responsible for these images (some of which have been going viral recently, 99% of the time with no context whatsoever beyond LOL weirdness) analyse examples of what the programmers…