
Thinking about thinking: what does this have to do with the future?

If Critical Futures Studies can be characterised by the idea that "the way we think about the future influences the futures that we get", then what might the implications be for the future of the way that we think about thinking itself? This is a theme that I would like to explore here over the next little while.

Personal archaeology - stream A
If I were to trace the awakening of my own interest in futures studies and foresight, I might find the origins in a rather strongly themed period around 1998-99, during which I read Eric Drexler's Engines of Creation, Kevin Kelly's Out of Control, Drexler's Nanosystems, Hans Moravec's Robot: Mere Machine to Transcendent Mind and Ray Kurzweil's The Age of Spiritual Machines. It all culminated in attendance at the Seventh Foresight Conference on Molecular Nanotechnology in October 1999. I would say that this exploration, underpinned by my mechanical engineering background, was carried out essentially in the spirit of appreciative enquiry. Even so, the state I found myself in afterwards sat somewhere between "perplexed" and "profoundly uneasy". Something seemed to be missing.

Personal archaeology - stream B
During the same three-month work assignment in which I read Engines of Creation - at Hindustan Copper Limited's Indian Copper Complex in Ghatsila, Bihar, in 1998 - I started to develop more seriously my interest in what I would generally call "consciousness and mind studies". This interest had been lying dormant for a few years, since my previous stint working at the site in 1995, and it has grown over the intervening (almost) decade.

Stream integration
There seemed to be some relationship between the Stream A interests and the Stream B interests, but at that stage not much hope of seeing how they fitted together. Roll forward several years, and with a few twists and turns in the track I embark on post-graduate study in futures and foresight, where the focus is on Critical and Integral Futures. As it turns out, this has provided an excellent launching pad for looking a little more deeply at what is going on in views of the future revolving around the technological singularity concept and transhumanism. Mind and consciousness studies play a central role here.

So with this providing a little background on the "why" of this theme...

Machine translations: have rumours of the demise of knowledge work been greatly exaggerated?
Last week, I received a link to this article by Steve Vatek about machine/computer translation. While I largely support Vatek's view that computers cannot do the job of human translators because they lack the capacity to interpret the meaning associated with a string of symbols, rather than because of a soon-to-be-remedied shortfall in processor speed, I suggest there are at least two respects in which we might need to exercise caution before pronouncing machines incapable of meaning-making:

Firstly, we may well be able to build "machines" that can translate as well as humans - but I wonder if, by that time, we will essentially be building humans rather than translating machines. That is, I wonder how much redundancy there is in humans: might we need the whole human, embedded in whole social and physical environments, before the "machine" output is judged to be of similar quality to that of human translators? And when we reach this stage, has the whole point of building machines to replace people perhaps been lost?

Secondly, I also wonder if machines might have a significant capacity for understanding meaning - but within a framework particular to the machine world: in the case of the translating machine, a world of symbol manipulation that is essentially isolated from the human world that comprises the referents of those symbols. What I mean is that perhaps when a machine responds to or interacts with its environment - in this case an environment comprising 1s and 0s - it is engaging in a form of cognition in its own right, but one very much more basic than human cognition. Perhaps this will develop further, within the constraints of the domains in which machines interact with their environment. It might be more appropriate to consider this in terms of a hierarchy of meaning and understanding, based on the complexity of the environment within which actions take place.
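To make this second point a little more concrete, here is a deliberately crude sketch of a "translator" that does nothing but shuffle symbols. It is my own toy example, not Vatek's, and not how any real translation system works; the tiny lexicon and the sentence are invented purely for illustration.

    # Toy symbol-shuffling "translator": the program maps tokens to tokens and
    # never has access to the things the words refer to. The lexicon and the
    # example sentence are invented for illustration only.
    lexicon = {
        "the": "le",
        "river": "riviere",
        "bank": "banque",   # the financial sense - the only "bank" this machine knows
        "is": "est",
        "steep": "raide",
    }

    def translate(sentence: str) -> str:
        """Replace each token with its lexicon entry, passing unknown tokens through."""
        return " ".join(lexicon.get(token, token) for token in sentence.lower().split())

    print(translate("The river bank is steep"))
    # -> 'le riviere banque est raide': well-formed symbol manipulation,
    #    but the river/finance ambiguity of "bank" is invisible to the program.

Real systems are of course far more sophisticated than this, but the sketch shows the gap being pointed at: output can be produced, and even be well-formed within the machine's own world of symbols, without any contact with the referents those symbols carry for humans.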
