18 thoughts on “AI detectives are cracking open the black box of deep learning”

  1. Training large neural nets became feasible not only due to advances in computational power (that's certainly true), but also thanks to new ideas like training each layer individually, computationally cheaper activation functions like ReLU, autoencoders, and a bunch of other theoretical advances.
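
    To see why ReLU counts as "computationally cheaper", here is a minimal sketch (NumPy; the function names are mine) contrasting it with the sigmoid it largely replaced:

    ```python
    import numpy as np

    def relu(x):
        # ReLU is just max(0, x): no exponentials to evaluate, and its
        # gradient (0 or 1) does not vanish in deep stacks of layers.
        return np.maximum(0.0, x)

    def sigmoid(x):
        # The classic activation ReLU displaced: one exp() per unit,
        # and it saturates, shrinking gradients in deep networks.
        return 1.0 / (1.0 + np.exp(-x))
    ```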

    Reply
  2. We should cut AI off from all deception, violence, horror and evil in audio, video and text; otherwise it will learn these things and replicate them, deceiving us in order to eliminate us. Mankind's evil will train AI to be just as evil as man.

    Reply
  3. I actually disagree.
    I think we do NOT need to understand exactly how neural networks work.

    Why?
    Because OF COURSE they will use stupid criteria, at some point.
    Stupid criteria that do work.
    Stupid criteria that we'd never trust, but that a computer found to work.
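
    As a toy demonstration of a "stupid criterion that works" (the data and both features here are entirely invented): a classifier offered a noisy real feature and a cleaner spurious one will happily lean on the spurious one.

    ```python
    # The label is really defined by feature 0, but feature 1 (spurious,
    # e.g. a watermark) correlates almost perfectly in the training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 1000)
    signal = y + rng.normal(0, 1.0, 1000)    # noisy "real" feature
    spurious = y + rng.normal(0, 0.1, 1000)  # clean "stupid" feature
    X = np.column_stack([signal, spurious])

    clf = LogisticRegression().fit(X, y)
    print(clf.coef_)  # the spurious feature typically gets the far larger weight
    ```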

    Reply
  4. Saying that we don't know how a decision was made isn't exactly right. We can print out the weights. A more accurate statement might be that we don't want to know. We don't want to analyze the weights because it won't help us humans think better. What we do know is that some patterns in the data are more important than others when making decisions. In facial recognition, for example, we can discover which neurons are firing most often, and then see which patterns cause that behavior. We find that there is a set of biometrics that is dominant in identification.
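
    To ground this, a minimal sketch of what "printing the weights" and watching which neurons fire looks like in practice (PyTorch; the toy model and all names are mine):

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    # The weights are fully visible -- the problem is interpreting them.
    print(model[0].weight.shape)   # torch.Size([64, 128])

    activations = []
    def record(module, inputs, output):
        # Record how often each hidden unit is active (non-zero after ReLU).
        activations.append((output > 0).float().mean(dim=0))

    model[1].register_forward_hook(record)
    model(torch.randn(256, 128))   # a batch of (fake) inputs
    print(activations[0])          # firing rate per hidden unit
    ```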

    Reply
  5. I'm glad he points out that artificial neurons are different from biological ones, but he still falls prey to the misleading language used in this field. No, neural networks do not "think", they are simply weighted toward a certain local extremum in a multi-dimensional space. Or, to be a bit less technical: they are just a giant machine with all its dials tuned to a specific setting that happens to classify certain data inputs with good accuracy. But that's like saying your thermometer "thinks" when it shows you have a fever.

    We confuse the abilities of these machines with "intelligence" because most of the time the experiments and/or practical applications they appear in resemble human behavior. Facial recognition, for example, is something humans can do, and we seem to think that if a machine can do it, "it must be intelligent". In reality, the fact that an AI can do it only proves that facial recognition (from still images) is a task that can be done artificially with decent statistical accuracy, given enough examples. The machines won't actually "know" how they identify faces, they just tune themselves to details that give a correct result most of the time. In a human, this is probably just a tiny portion of the mechanisms involved in recognizing a face.
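
    The "dials tuned to a specific setting" picture is quite literal: training is an optimizer sliding parameters toward a local minimum of a loss function. A minimal sketch (plain Python; the one-dial loss is invented for illustration):

    ```python
    def loss(w):
        return (w - 3.0) ** 2 + 1.0   # minimum at w = 3

    def grad(w):
        return 2.0 * (w - 3.0)        # its derivative

    w = 0.0                           # the "dial", badly set
    for _ in range(100):
        w -= 0.1 * grad(w)            # turn the dial downhill
    print(round(w, 4))                # -> 3.0: tuned, not "thinking"
    ```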

    Reply
  6. As an experienced caricature artist, I know that self-reports from human beings examining the facial features of a person (or a drawing of a person) are useless at best and very often misleading. Maybe for playing a frog game it's OK, but you can't get any insight into the face-recognition process of an AI by using this comparison method.

    Reply
  7. About the solution he provided:
    So the neural net should be able to explain its decision. In the example of the cat, it could say:
    I think it is a cat because it has fur, it has long whiskers, it has a tapetum lucidum (the layer that makes a cat's pupils reflect green), and so on.

    But for this to happen, the neural net would need an internal model of the real world, which means a complex understanding of conceptual structures and of how entities in the real world are interconnected, plus the ability to describe it all in human language. Now, I am absolutely no expert on this topic, but that sounds like a technology you would definitely not be keen to deploy just for this particular scenario.
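
    On the surface, assembling the explanation itself is trivial; the hard part is producing grounded attribute scores, which is exactly the world model described above. A toy sketch (every attribute name and score here is invented):

    ```python
    ATTRIBUTES = {
        "fur": 0.97,
        "long whiskers": 0.91,
        "tapetum lucidum (green eye-shine)": 0.84,
    }

    def explain(label, attributes, threshold=0.5):
        # Keep only attributes the (hypothetical) detectors scored highly.
        reasons = [name for name, score in attributes.items() if score > threshold]
        return f"I think it is a {label} because it has " + ", ".join(reasons) + "."

    print(explain("cat", ATTRIBUTES))
    # I think it is a cat because it has fur, long whiskers, tapetum lucidum (green eye-shine).
    ```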

    Reply
  8. Lol, way ahead of everyone: if 1+1=2, then 2=2. You see, you don't teach it how to answer something, you teach it how to question it. Because the answer is already there as you question it; you're just breaking things down to a better understanding of what is already there. Say there is an apple right in front of you. If I can see, feel, smell, hear, taste, and know the apple, I am the apple. There is one thing I can promise: we are getting nowhere with AI if we never rethink modern hardware. Just because you see progress doesn't mean it will be successful. And I understand you want to observe the outside world, Satan, but you are getting nowhere controlling us to help you with your science project. To mimic someone else's work is wrong. Be creative and don't stick to the textbooks. Sure, you can get inspiration and ideas, but learn to do it better. (Everything came from the damn ground, and Earth is as flat as the sphere in your 3-dimensional software on your 2-dimensional laptop screen, in your 3-dimensional reality on your 2-dimensional cornea.) I went beyond the cave because I was brave. Don't judge.

    Reply
  9. This is a huge potential problem: when you have computers designing computers based on their own feedback for dozens or hundreds or even thousands of iterations, you will eventually get a result that the original programmers cannot understand, because they were not fully involved in the design process! How do you fix such a system when something goes wrong if you don't even understand how it works?

    Reply
