Arts, Psychology, Research, Science

Psychology: Choice

POSITIVE PSYCHOLOGY

The more alternatives, the more difficult the choice.

It goes without saying that some choice is good and that more choice is even better. The freedom to choose lies at the heart of any democratic, equal and healthy society based on a free market, ranging from choices as important as which school our children attend and who to vote for, to choices as mundane as what to eat from the canteen menu, what to wear and which TV programme to watch this evening. The flipside of having choice is that we also have to take responsibility for the decisions we make and live with their consequences.

Various studies suggest that feeling that we can control our destiny is vital to our psychological well-being, and that limiting personal choice reduces well-being. There is no doubt that over the past 20 or 30 years we have been seduced by the power of choice, to the point that most of us take it for granted, and don’t really give it a second thought. Choice means we have freedom. It means we can express who we are as individuals and it’s central to our identity. Choice is now central to every domain of our lives.

But is having greater and greater personal choice really better for us? Some psychologists believe not, and have shown in research that increased choice makes us unable to make decisions and reduces our well-being. Barry Schwartz, acknowledged world expert on the psychology of choice, states that the fact that some choice is good doesn’t necessarily mean that more choice is better. Schwartz refers to this as “the tyranny of choice”.

Four decades ago, sociologist Alvin Toffler described a psychological reaction to constant change and too much choice as “future shock”. He theorised that faced with too much choice – which he called “overchoice” – in too short a period of time, decisions would be harder and take longer to make as we’d have to process much more information. This would lead to slower reactions and decisions, and ultimately to psychological issues such as depression, distress and neurosis.

Recent research in psychology backs this up, suggesting that there are a number of problems associated with having too much choice. For example, in order to make a choice you’ll have to make some form of comparison between the different alternatives, which means sifting through an increasingly large amount of information about each one.

Some parts of the NHS appointments service in the UK utilise a “choose and book” system. Previously, patients would have gone directly to their local hospital; now there are pages of statistics from several hospitals within a 30-mile radius to wade through, including details on infection and mortality rates, car-parking availability and staff satisfaction rates. In situations like this, even if the majority of the available pieces of information are irrelevant to the choice you’re making, you still have to decide whether or not to take each one into account. The volume and complexity of information you have to deal with increases the likelihood of making the “wrong” choice or making a mistake. In short, having too much choice causes you to worry, and is likely to lead to lower rather than higher well-being.

Findings from various experimental studies challenge the implicit assumption that having more options is better than having fewer. For example, shoppers are more likely to buy gourmet jams or chocolates and students are more likely to complete an optional class essay when they’re offered a limited array of six choices rather than an extensive array of 24–30 choices. What’s more, the shoppers reported greater subsequent satisfaction with their selections, and the students wrote better essays when their original set of choices was limited.

Psychology researchers conclude from these studies that having too much choice can have significantly demotivating effects. In relatively trivial contexts, not making a decision, such as going home without buying a pot of jam or a box of chocolates, is neither here nor there. More worryingly, choice overload may hinder decision-making in other more serious contexts, such as choosing medical treatment, especially where there are (or are perceived to be) costs associated with making the “wrong” choice, and where it takes the chooser a significant amount of time and effort to make an informed decision.

Are you a maximiser or a satisficer?

Back in the 1950s, Nobel prize-winning social scientist Herbert Simon introduced the distinction between maximising and “satisficing” as decision-making strategies. A maximiser is someone who wants to make the best possible choice, and so they complete an exhaustive study of all the available options before making their decision. A satisficer, on the other hand, is someone who is looking to make a “good enough” choice, so they keep looking at options only until they find one which meets their minimum requirements.
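Simon’s two strategies can be sketched as simple search procedures. This is a minimal illustration in Python; the jam “quality scores” and the threshold are invented for the example, not taken from the studies above:

```python
def maximise(options, score):
    """Maximiser: evaluate every option and keep the single best one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Satisficer: stop at the first option that is 'good enough'."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # no option met the minimum requirement

# Hypothetical quality ratings for five jars of jam
jams = [3, 7, 5, 9, 6]
print(maximise(jams, lambda q: q))      # 9 -- only after scoring all five
print(satisfice(jams, lambda q: q, 6))  # 7 -- stops at the second jar
```

Note that the maximiser’s effort grows with the number of options, while the satisficer’s does not, which is one way of seeing why large choice sets weigh on maximisers in particular.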

It’s unlikely you’re a 100 per cent maximiser or 100 per cent satisficer, although you’ll lean more towards one than the other. If you agree with statements such as “I never settle for second best,” and “Whenever I’m faced with a choice, I try to imagine what all the other possibilities are, even ones that aren’t present at the moment” you’re more likely to be a maximiser than a satisficer.

Although studies show that people who maximise tend to get better, higher-paying jobs than satisficers, at the same time they take longer to settle in and they’re more stressed, anxious and frustrated! Maximisers are also more prone than satisficers to be affected by social comparisons and have doubts about their ability compared to others.


Can I raise my IQ?

IQ SCORE

Intro: We revere intelligence and reward “smart” people with a high IQ score, but are we actually born with our intelligence, and can we learn to be smarter?

THE IQ (Intelligence Quotient) score is an internationally accepted measure of total brain power, calculated via a series of tests. These tests usually involve numeracy, spotting patterns, and logic – so if your forte lies in practical problem solving, negotiating with others, or creativity, the chances are you won’t excel at an IQ test.

Intelligence has no clear definition. Like beauty or personality, it’s relatively subjective, and this is one reason why IQ scores are problematic.

Another issue is that IQ tests have historically been created by (mostly) men in Europe and North America – and are skewed in favour of people from Western culture. Someone from a community that values storytelling, for example, may have great verbal reasoning and memory, yet their overall IQ score might be low because they flopped on the number puzzles.

So, although scientists have worked hard to make tests reliable and relevant across cultures, IQ tests have a limited use. For mentally taxing jobs such as programming, a test is an effective barometer for picking the best candidate. However, if we were patients given the choice between a 22-year-old novice surgeon with a genius IQ and a 55-year-old expert with countless surgical operations under their belt, we would all know who to choose.

Experience, knowledge, social skills, drive, and conscientiousness – all of which could be considered intelligence – are not accounted for in the conventional IQ test. Even though young adults tend to get the best overall IQ scores, just like a fine wine, many of our abilities continue to improve with age.

Good IQ score or not, everyone can improve their cognitive powers, regardless of their age, schooling, and past experience. Don’t be sold on quick fixes: brain training games and programs will help you get better at those games and tasks, but rarely translate into any practical thinking powers. To get really good at something – be it memorising place names, coding software, or crafting musical compositions – you’ll need to practise that particular skill for many hours.

IQ’s disturbing history

The first intelligence tests were devised by French psychologist Alfred Binet in the early 1900s as a benevolent way to find the least able children who needed special schooling. A strong believer in the idea that intellectual prowess was not set in stone and could be improved with teaching, practice, and discipline, Binet insisted that intelligence tests should never be used “for ranking [people] according to mental worth”.

Soon after his death in 1911, however, Binet’s tests – later named “IQ” tests – were seized upon by scientists in the eugenics movement, who reformulated them for adults, labelling those who scored poorly as “feeble-minded” or “degenerate”. In the early 1900s, 30 US states passed laws that meant low-scoring people could be forcibly sterilised. By the middle of the 20th century, around 60,000 people had been sterilised against their will.

Adolf Hitler also espoused the IQ test and created his own stylised version. Hundreds of thousands of low-scoring people were duly sterilised or executed in Nazi Germany.


Quantum Leaps: Albert Einstein

1879–1955

OF the essays written by Einstein in 1905, arguably the most influential was his enunciation of a “special” theory of relativity, which advanced the idea that the laws of physics are identical for different observers, regardless of their position, as long as they are moving at a constant speed in relation to each other. Above all, the speed of light is constant for every such observer. The classical laws of mechanics merely appear to be obeyed in our everyday lives because the speeds involved are insignificant compared with the speed of light.

The Speed of Light

But the implications of this principle when the observers are moving at very different speeds are bizarre: normal indicators of velocity such as distance and time become warped. Indeed, absolute space and time do not exist. If a person were, theoretically, to travel in a vehicle in space close to the speed of light, everything would look normal to them, but another person standing on earth waiting for them to return would notice something very unusual. The spaceship would appear to be getting shorter in the direction of travel. Moreover, whilst time would continue as “normal” on earth, a watch telling the time in the ship would run slower from the earth’s perspective, even though it would seem correct to the traveller (because the faster an object moves, the slower its time passes). This difference would only become apparent when the vessel returned to earth and clocks were compared.

If the observer on earth were able to measure the mass of the ship as it moved, he would also notice it getting heavier too.

Ultimately, nothing could move at or beyond the speed of light, because at that point it would have infinite mass, no length, and time would stand still.
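The three effects just described – slowed clocks, shortened lengths, growing mass – all scale with the same Lorentz factor, γ = 1/√(1 − v²/c²), which blows up as v approaches c. A minimal numerical sketch (the 0.9c speed and 100 m ship are illustrative values, not figures from the text):

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); grows without bound as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

g = lorentz_factor(0.9 * C)  # travelling at 90% of light speed
print(f"gamma at 0.9c: {g:.3f}")                            # ~2.294
print(f"1 hour aboard appears as {g:.2f} hours on earth")   # time dilation
print(f"a 100 m ship measures {100 / g:.1f} m from earth")  # ~43.6 m, length contraction
```

At everyday speeds γ is indistinguishable from 1, which is why, as the text notes, classical mechanics appears to hold in normal life.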

A General Theory of Relativity

From 1907 to 1915, Einstein developed his special theory into a “general” theory of relativity, which included equating accelerating forces and gravitational forces. Implications of this extension of his special theory suggested light rays would be bent by gravitational attraction and electromagnetic radiation wavelengths would be increased under gravity. Moreover, mass, and the resultant gravity, warps space and time – which would otherwise be “flat” – into curved paths which other masses (for example, the moons of planets) caught within the field of the distortion follow.

Amazingly, Einstein’s predictions for special and general relativity were gradually proven by experimental evidence. The most celebrated of these was the measurement taken during a solar eclipse in 1919 which proved the sun’s gravitational field really did bend the light emitted from stars behind it on its way to earth. It was the verification which led to Einstein’s world fame and wide acceptance of his new definition of physics.

Einstein spent much of the rest of his life trying to create a unified theory of electromagnetic, gravitational and nuclear fields but failed. It was at least in keeping with his own remark of 1921 that “discovery in the grand manner is for young people and hence for me is a thing of the past.”

E=MC²

Fortunately, then, he had completed three other papers in his youth (in 1905) in addition to the one on the special theory of relativity! One of these included the now famous deduction which equated energy to mass in the formula E=mc² [where E=energy, m=mass and c=the speed of light]. This understanding was vital in the development of nuclear energy and weapons, where converting even a small amount of atomic mass – multiplied by the enormous factor of the speed of light squared – could unleash huge amounts of energy.
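The scale of the formula can be checked with a line of arithmetic. A sketch; the one-gram figure is an illustrative value, not one from the text:

```python
C = 299_792_458.0  # speed of light in metres per second

def mass_to_energy(mass_kg: float) -> float:
    """E = m * c^2, returning energy in joules."""
    return mass_kg * C ** 2

# Converting a single gram of mass yields roughly 9 x 10^13 joules --
# about 21 kilotons of TNT, the yield of an early atomic bomb.
print(f"{mass_to_energy(0.001):.3e} J")  # ~8.988e+13 J
```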

The third paper described Brownian motion, and the final paper made use of Planck’s quantum theory in explaining the phenomenon of the “photoelectric” effect, helping to confirm quantum theory in the process.

Further Achievements

Almost inevitably, Einstein was also drawn into the atomic bomb race. He was asked by fellow scientists in 1939 to warn the US President of the danger of Germany creating an atomic bomb. Einstein himself had been a German citizen, but had renounced his citizenship in favour of Switzerland, and ultimately America, having moved there in 1933 following the elevation of Hitler to power in his home country. Roosevelt’s response to Einstein’s warning was to initiate the Manhattan project to create an American bomb first.

After the war Einstein spent time trying to encourage nuclear disarmament.

In 1922, Albert Einstein was awarded the Nobel Prize for Physics – not for relativity, but for his explanation of the photoelectric effect.
