Sunday, May 30, 2010

Some fundamental thoughts on human intelligence

Andrew Atkin:

Some people argue that there is a real and substantial genetic difference in intellectual capacity between different people. On a base level, I doubt a substantial difference exists.

Explaining:

The human brain is an expensive appendage. It consumes about 20% of our metabolic energy. Hence, there must be significant evolutionary pressure to stop it from growing unnecessarily large. However, the brain itself is only the hardware. The software of the brain is the in-built architecture that likewise dictates our intellectual capacity. We can assume that the software (in terms of in-built architecture) is nearly perfect amongst most people, simply because good software does not come at an adaptive cost. Better software should be heavily reinforced (in evolutionary terms) because it provides only 'advantage' and virtually no 'disadvantage' in itself. So in terms of architecture, the human brain should be essentially optimised for all specimens - notable imperfections in architecture should have been weeded out long ago.

So, considering that the human brain is virtually the same size (as a ratio to the brain-stem) in all human animals, and considering that all human brains are made of the same neurological matter, and considering that there is no reason to believe that there would be any substantial difference in the quality of the neurological architecture amongst different humans, I would in turn question any claim of a fundamental genetic superiority of any individual or human group over another. That assertion simply does not seem rational.

So where are the differences?

Obviously there is a difference between intellectual types (people possessing different programmes designed to process different things). The most commonly observed differences are between the sexes.

I would also guess that there could be random differences amongst various individuals as well. The human animal is a group animal, and it makes sense for evolution to exploit our capacity to specialise; evolution may therefore have provided some in-group variability, beyond the differences seen between the sexes, to enhance specialisation.

There are also bound to be at least some differences between the races, which have part-evolved in substantially different environments - i.e. a different collection of intellectual strengths and weaknesses.

However, I think the biggest differences in mental ability that we see manifest amongst different people come from environmental influences, and maybe especially epigenetic influences (epigenetics has often been confused with genetics - we now know that the womb environment switches all kinds of different genes on and off, depending on its status). And almost certainly the overwhelming factor leading to a compromised brain is deprivation. [see Understanding Mental Sickness in my June index]

How do we measure?

There is no objective way to measure the difference between different brains. The only thing we really know that an IQ test tests for is the individual's ability to perform on it. Though, of course, serious retardation is obvious through casual observation.

Culture:

Our definition of what intelligence is comes mainly from what our current society requires. For example, if we lived in a world where 99% of the occupational demand was for computer programmers, then we would tend to define intelligence through a test that indicates an individual's ability to perform at this specific role.

Cultural requirements lead to labelling - and the labelling is no doubt just a part of the incentivising process which encourages people to conform to their society's needs.

However, as I have tried to show, the labelling of different people as 'bright' or 'dumb' is probably quite erroneous in terms of its ultimate accuracy. Mislabelling can also undermine people's confidence on false grounds, leaving people with otherwise ample capacity demotivated by a false belief in their inherent lack of ability. Obviously I think this is something we should be careful about.

---------------------------------------------------------------------

Addition: 2-3-11:

How do you make an IQ test?

John Taylor Gatto, the famous educational critic, has commented that the IQ-test does not validate the intelligence bell-curve theory, but rather that the IQ-test was derived from it.

Meaning: The presumption that differences in intelligence between various people within a given population fall along a bell-curve is first taken as a given; the IQ-test is then developed by chopping and changing the questions within the test until experimentation produces a bell-curve result. In other words, it's one object of b.s. measured up against another object of (probable) b.s. to "prove" what is almost certainly, at base, total b.s.
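Gatto is describing item selection, but an even simpler mechanism makes the same point and is easy to sketch: modern tests also apply a "norming" step that maps raw scores onto a normal curve by rank, so the bell-shape is guaranteed by the scoring itself. Below is a minimal Python sketch of that step, with made-up raw scores; nothing in it comes from a real test.

```python
# A sketch of the norming step, using invented raw scores. Whatever the raw
# distribution looks like, ranking the scores and mapping the percentiles
# onto a normal curve makes the published "IQ" bell-shaped by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend the raw test scores come out heavily skewed, clearly not a bell curve.
raw = rng.exponential(scale=10.0, size=5000)

# Norming: rank each raw score, convert the rank to a percentile, place the
# percentile on the normal curve, then rescale to the IQ convention (100, 15).
percentiles = stats.rankdata(raw) / (len(raw) + 1)
iq = 100 + 15 * stats.norm.ppf(percentiles)

print(round(stats.skew(raw), 2))  # large: the raw scores are lopsided
print(round(stats.skew(iq), 2))   # ~0.0: the "IQ" scale is bell-shaped regardless
```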

Is Gatto right on that? I would say he is bound to be, because there is in fact no way to measure intelligence from a position that is not ultimately subjective. And there is no way to validate the pure presumption that human intelligence follows a bell-curve structure in terms of social differences.

In my view, for example, the differences in intelligence between various healthy human minds will only be discrete, and what we recognise as "genius" is just the effect of a relatively intense developmental specialisation (most obvious in idiot-savants). So am I right? You, me and everybody else can only guess. Nothing in this territory can be measured without a subjectively derived measuring stick.

However: You will almost certainly find broad correlations between how individuals perform on IQ-tests and how they perform in the real world with respect to professional and academic success. But this will only be because you can expect a correlation between intelligence and real-world performance on any test requiring some mental aptitude. It is true that intelligent people will tend, on average, to do better on IQ-tests (or any test requiring some essential intelligence).
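To make that concrete, here is a toy simulation of the argument: if test scores and career outcomes both draw on a shared underlying aptitude plus their own noise, a broad correlation falls out by construction. Every name and number below is invented for illustration.

```python
# A toy model of the shared-ingredient argument: IQ score and career outcome
# each equal the same underlying aptitude plus their own independent noise,
# so a broad positive correlation between them is guaranteed by construction.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

aptitude = rng.normal(size=n)                        # the shared ingredient
iq_score = aptitude + rng.normal(scale=1.0, size=n)  # the test taps it, noisily
career   = aptitude + rng.normal(scale=2.0, size=n)  # so does life, more noisily

r = np.corrcoef(iq_score, career)[0, 1]
print(f"correlation: {r:.2f}")  # positive and moderate, without the test
                                # "measuring intelligence" in any deep sense
```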

So, IQ tests may have a place for broad studies of populations*, but they should never be taken too seriously as a measuring stick for isolated individuals. An individual with a high IQ can still be rather stupid, and vice versa.

*Even then, they must be taken with a pinch of salt due to cultural differences and differences in developmental histories, etc. For example, a brilliant mind will never be good at maths if they have never actually done it. You can't test in isolation from history.

---------------------------------------------------------------------

Addition: 4-6-11:

What actually is intelligence?

At base, I don't think it's anything special. Intelligence is ultimately just an internalisation - through memory - and finally a simulation of the external environment. Example: I remember a while back reversing my car into a park. Before I did it I naturally prior-simulated the event in my mind, so I then did it efficiently in practice.

I think it's a safe assumption that this is how advanced predators operate, and why they tend to be more intelligent than their prey. They simulate what they are about to do before they do it so they can be fast and precise in practice, as most of their neurological processing for the attack has been taken care of in advance.

The more developed and powerful the simulator (brain), the further we can look into the future. I would say that human intelligence evolved from pressure for our species to do exactly that. Humans, obviously, have incredibly long-range perception (simulation), to the point where we can 'experiment'. Much (maybe all?) thinking is just experimenting with different simulated scenarios. And we do this of course in relation to both the social and physical worlds.
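Here is a minimal sketch of that "thinking as experimenting with simulations" idea: an agent runs candidate plans through an internal model of the world and only executes the one whose imagined outcome scores best. The world, actions and goal below are invented stand-ins, not a claim about neural machinery.

```python
# A toy "mental simulator": try every candidate plan in imagination (against
# an internal model of the world), score the imagined outcomes, and execute
# only the winner.
import itertools

START, GOAL = 0, 7                             # e.g. current position and the parking spot
ACTIONS = {"creep": 1, "roll": 2, "lunge": 4}  # hypothetical moves and their effects

def simulate(plan):
    """Internal model: predict where a sequence of actions would leave us."""
    pos = START
    for action in plan:
        pos += ACTIONS[action]
    return pos

def score(pos):
    """Prefer imagined outcomes that land exactly on the goal."""
    return -abs(GOAL - pos)

# "Thinking": experiment with every three-step plan before moving a muscle.
best = max(itertools.product(ACTIONS, repeat=3), key=lambda p: score(simulate(p)))
print(best)  # a plan whose moves sum to 7, found entirely in simulation
```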

As humans simulate different but common scenarios, we obviously get more efficient at it, as our brains learn to 'crop back' on neurological processes (data compression) that are not required for an accurate simulation of the real world. Repetition will facilitate cropping and therefore efficiency.
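By loose analogy only (memoisation is software caching, not a claim about how neurons compress anything): in code, a repeated simulation gets cheap because its stored result substitutes for re-running the full computation.

```python
# Loose software analogy for "repetition facilitates efficiency": the first
# simulation of a scenario pays full cost; repeats reuse the cached result.
# The scenario encoding here is a made-up stand-in for a mental simulation.
from functools import lru_cache

@lru_cache(maxsize=None)
def simulate_scenario(scenario: str) -> int:
    return sum(ord(c) for c in scenario) % 100   # stand-in for expensive work

simulate_scenario("reverse into the park")   # computed in full
simulate_scenario("reverse into the park")   # returned straight from the cache
```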

However, the potential for accurate simulation will of course be dependent on receiving good original information. E.g. if a mother doesn't interact with her baby in the earliest months of its life, then that child will never be able to develop a proper "social simulator", and will always struggle to relate to (simulate) other people as an adult.

However, what I am talking about is the computer. So what about the "person" behind it? Does the consciousness that views the memory-sequence have an interactive role to play whereby it works with the simulations on some unknown/unknowable level? Who knows.

1 comment:

  1. Gatto was not the only one who suggested that the IQ test was useless. Robert Sternberg suggested the very same thing in The Triarchic Mind: A New Theory of Human Intelligence (paperback, Aug. 4, 1989).

    Sternberg’s book is well worth the read, even though now 22 years old. Some are good at reciting facts but not at grasping their implications. Some are practical or socially smart. But real smarts, to me, is being able to discern patterns made up of many pieces. It takes time to develop critical thinking and accumulate enough knowledge to where you become really sharp. It is a subject that deserves much thought and talk.
