Ming the Mechanic - Category: Knowledge
An old rigid civilization is reluctantly dying. Something new, open, free and exciting is waking up.


Monday, May 28, 2007

 Owning your learning path
Picking from some of the interesting presentations and discussions coming up at reboot, Ton Zylstra and Elmine Wijnia will focus on tools for owning your learning path.
What does it take to be the owner of your own learning path, so that you can reach your goals?
And if we can say something useful about that learning path, does that give us the means to pro-actively shape the social software tools that give us the affordances we need?
Empowering people to be in control of their own lives is, we think, the single most important thing to reach for.
With social software tools more people than ever are able to share their thoughts and work, and interact with others.
A lot of that development, however, is technology driven. It is a side-effect of existing changes.

But now we are at a stage where we can turn that around, and start thinking about social software tools in terms of the affordances empowered individuals really need.

Learning is a continuous activity and primary source of empowerment. Being able to control your own learning path is therefore an important issue. What does it take to own your learning path? That is what we would like to explore in this conversation.

How to know what to learn?
How to know when to learn it?
How to know how to learn it?
How to know when you've learned it?
How to seek out the 'right' social and physical environment to help and allow you to learn?

Working these questions out in more detail will create, we think, a model that allows us to identify the skills and tools individuals need to own their own learning paths. That in turn allows us to look at the landscape of available tools, and start telling the tool smiths where to go next.
I like that. Are there really any good tools that support learning in life? For that matter, are there any good tools for supporting ongoing progress and development in some area, or in several areas at the same time? There's project management software if you have a specific business project with targets and deadlines. There are systems for staying organized. Many systems for recording stuff, like blogs and wikis. There are ways of scheduling things, like, of course calendars. But learning? There is course software, for presenting lessons and doing tests and that kind of thing. But for learning in life? I can't think of anything, so that's a great thing to work on.
[ | 2007-05-28 22:15 | 5 comments | PermaLink ]  More >


Monday, March 26, 2007

 Savants and synesthesia
Daniel Tammet is a savant with quite fantastic mental abilities, and only a few of the negative effects of autism. You can see a 50-minute documentary with him here. Amongst other feats, he has recited Pi to 22,500 decimals, and he can learn a new language in a week. In that program they put that to the test, by asking him to learn Icelandic in 7 days. Icelandic is very hard, but at the end of the week, he was interviewed on Icelandic TV and had obviously mastered it.

Part of what is interesting is that, as with many autistic savants, part of his trick is synesthesia. What is particularly unique about him is that he can articulate his own mental processes. He loves numbers, but he doesn't really do calculations. He experiences each number as a certain visual pattern. Each number from 1 to 10,000 has a certain distinct shape and color to him, which he can draw or model in clay. When he's asked to calculate something, the result sort of flickers in front of his eyes, and he simply reads off what he sees.

One of the researchers tried to throw him off by presenting him with a section of the decimals of Pi which was wrong, with some digits in the wrong place. And, whereas the real series of decimals is pure beauty to him, the false series gave him a strong reaction of being wrong and disharmonious.

What's sort of interesting and inspiring about people like that is that it hints at the possibility that anybody could do the same thing, if we better understood how. Their brains have somehow become short-circuited a bit, so they don't have the filters 'normal' people have, but they have more direct access to their abilities. Which often comes with a cost of lost functionality in some other area, or an inability to understand emotions. But sometimes it doesn't.

... Oops, I actually wrote about him before. I was looking for a picture of him, and Google suggested I'd find it on ming.tv. I guess I don't have perfect memory.
[ | 2007-03-26 20:35 | 3 comments | PermaLink ]  More >


Wednesday, February 7, 2007

 Topology homework
I wish I were an expert in topology and that I easily could visualize multi-dimensional structures. My intuition tells me there would be some kind of deep wisdom about life and the universe to find there. But, now, read about the best homework ever done. I don't know if that stands up, but that's what a math teacher said about the work of one of his students, Cassidy Curtis, who seems to have the unusual ability to visualize complex multi-dimensional structures, and draw them on paper, better than one can do with most available computerised tools. Like, see this thing about a torus in 3D space. Or, how about getting it as a tattoo? (Found on Metafilter).
[ | 2007-02-07 16:03 | 10 comments | PermaLink ]  More >


Wednesday, December 6, 2006

 Nice place you have here
I haven't used Microsoft SharePoint, but Euan's Commentary on why he doesn't like it applies more widely:
If social computing is going to be effective in the workplace things have to be different - fundamentally.

When we started building this stuff at the BBC we were consciously trying to build the online equivalent of a collection of Cotswold villages with lots of footpaths between them. You know where the pub and church are, you’re comfortable in the environment and you can locate yourself. Corporate systems tend to be more like Milton Keynes. On the surface they’re efficient with lots of straight lines and signposting, but you get lost because everything looks the same.

Using a new tool really does feel like walking into a room and working out what the atmosphere is like, what the other people are like, whether they feel like people I could get on with and whether we will be allowed to take our time to form a relationship and begin to get things done. Dave Snowden was right when he said that you can’t manage knowledge but you can create a knowledge ecology.

I can't put my finger on what it is - the graphics, the language used or the intentions behind the software - but I rarely get this feeling from Microsoft stuff, especially not SharePoint. They are too good at creating sterile environments run by control freaks who hate messiness, consider conversations unprofessional and rarely understand the true pulse of their organisations.

This stuff may be seen as "business-like" at the moment but I don't believe it will be what business is like in the future.
I hope not. Here's to human business networks.
[ | 2006-12-06 22:22 | 11 comments | PermaLink ]  More >


Saturday, November 18, 2006

 Center for Collective Intelligence
The Center for Collective Intelligence is a new academic center at MIT. This is their basic research question:
How can people and computers be connected so that—collectively—they act more intelligently than any individuals, groups, or computers have ever done before?
Ah, very good news. They start with observations like how Google or Wikipedia manage to pool the wisdom of many people in ways that often are better than what any individual or traditional group could accomplish. They have projects focusing on such things as:
  • How can large groups of people produce high quality written documents? For instance, how can the lessons of Wikipedia be applied to other groups and other kinds of documents? What kinds of technologies and motivational structures are needed?

  • How can groups of people make accurate predictions of future events? For instance, in prediction markets, people buy and sell predictions about uncertain future events, and the prices that emerge in these markets are often better predictors than opinion polls or individual experts. When and how do these prediction markets work best? How can they be combined with simulations, neural nets, and other techniques?

  • How can we harness the intelligence of thousands of people around the world to help solve the problems of global climate change? For instance, how can we use innovative combinations of computer-based simulations and explicit representation of argumentation to help people identify and analyze different policy alternatives?

  • How can we create an on-line, searchable library of books from many languages and historical eras? For instance, how can we harness a combination of human and machine intelligence to recognize the images of words in these books?

  • How can we help create commercially sustainable products and services for low-income communities around the world? For instance, how can we use cutting-edge technology to help a world-wide network of entrepreneurs and investors rapidly find, analyze, and replicate successful projects?

And they're working on a book called We are smarter than me. And I learned of this, of course, via Blog of Collective Intelligence.
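The prediction-market bullet above says that emergent prices can serve as probability estimates. The MIT text doesn't name a mechanism, so purely as an illustrative aside, here is a sketch of one well-known automated market maker for prediction markets, Hanson's logarithmic market scoring rule (LMSR):

```python
import math

# Hedged sketch: Hanson's LMSR, one standard automated market maker
# for prediction markets. Not taken from the MIT projects themselves.

def lmsr_cost(q_yes, q_no, b=100.0):
    # Market maker's cost function; the price of a trade is the
    # difference in cost before and after the trade
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=100.0):
    # Instantaneous YES price, readable as the market's current
    # probability estimate for the event
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

print(lmsr_price(0, 0))                    # 0.5: no trades yet, maximum uncertainty
print(round(lmsr_price(60, 0), 3))         # buying YES shares pushes the estimate up
cost = lmsr_cost(10, 0) - lmsr_cost(0, 0)  # price paid for the first 10 YES shares
```

The liquidity parameter `b` (a value I picked arbitrarily here) controls how much a trade moves the price: the larger `b`, the more money it takes to shift the market's estimate.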
    [ | 2006-11-18 18:12 | 18 comments | PermaLink ]  More >


    Sunday, November 5, 2006

     Simplicity and embracing constraints
    Jon Udell mentions Dabble DB:
    The October episode of The Screening Room features Dabble DB, a web-based workgroup database that, in the style of 37Signals, focuses on simplicity and embraces constraints. Dabble doesn't aim to do full-blown database application development, or sophisticated query, or heavy transactions. Its mission, instead, is to enable teams to easily manage and flexibly evolve modest (say, 30- to 50-megabyte) quantities of structured data.
    Now, Dabble is first of all cool, but that brings up several things. Dabble is a database you create on the fly online, which you can very easily import stuff into, particularly from spreadsheets, and you can very easily change it around, add fields, etc. And it is often intelligent enough to figure out what you might want to do, making it easy.
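Dabble DB's internals aren't described here, so this is only a toy sketch of the general idea: store records as plain dicts, so fields can be added on the fly and spreadsheet-style data imported without declaring a schema first. All names are made up for illustration:

```python
import csv, io

class FlexiTable:
    # Toy schema-less table: each row is a dict, so the "schema" is
    # just whatever keys the rows happen to have
    def __init__(self):
        self.rows = []

    def import_csv(self, text):
        # Pull rows straight out of spreadsheet-style CSV text;
        # the header row becomes the field names
        self.rows.extend(csv.DictReader(io.StringIO(text)))

    def add_field(self, name, default=""):
        # "Change it on the fly": every existing row gains the new field
        for row in self.rows:
            row.setdefault(name, default)

    def search(self, **criteria):
        # Naive exact-match search across all rows
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in criteria.items())]

t = FlexiTable()
t.import_csv("name,city\nAda,London\nJoe,Paris\n")
t.add_field("status", "new")
print(t.search(city="London"))
# [{'name': 'Ada', 'city': 'London', 'status': 'new'}]
```

The design choice this illustrates is that flexibility comes from deferring the schema: nothing breaks when a field appears mid-stream, which is exactly what makes such tools forgiving for non-database people.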

    That reminds me first that I made a program like that several years ago, where you create a database online, and you can change it on the fly, and import stuff into it, and you automatically have screens for searching, sorting, updating it, etc, and you don't need to know much about databases. My web app isn't nearly as cool, doesn't do any Ajax tricks, and doesn't make it nearly as easy. There are people using my app, who're happy with it, but I never got it to a point where it would be meaningful to market it.

    That's a bit of a puzzle I have as a programmer. Should I make a bunch of different functionalities that are tied together, but spread myself so thin that it ends up being a bit mediocre and not quite finished? Or should I get a team together and do it better? Or, should I do what is the current Web2.0 fashion - do one rather simple, limited thing, but make it very cool and easy to use, and then charge people per month for using it?

    As he says, focus on simplicity and embrace constraints. Make it very simple, resist the temptation to add more advanced features. Make the simplicity and lack of features BE a feature.

    Like 37 Signals, obviously. Very smart people making very simple programs, even bragging about how quickly they made them, but making them so well that they're very intuitive to use, and so compelling that thousands will happily spend $10 per month for using one of them.

    I'm trying to talk myself into it... Intellectually I get it, but instinctively I'd tend to go for a too-complicated solution when a simpler one would do.

    My pet programming project the last couple of years has been a collaborative environment I call OrgSpace, which has wikis, blogs, calendars, forums, chat rooms, project management, contact lists, event management, databases, a shopping cart, workgroups, and more, all integrated with each other. But that makes it so damned complicated that it is a bit crazy to try to do single-handedly.

    And some of the things I've done successfully have been very simple. Like the webcam thing I did in Opentopia. Took me not much more than a weekend to do, but it has gotten more hits than anything else I've put on the web. I recently sold that website, and it all sort of speaks for the idea of doing simple and cool things and getting them out the door.

    So, I'm trying to meditate on that simplicity-and-embracing-constraints message.

    It is like The Mythical Man-Month. Complexity grows exponentially as, uhm, things get more complex. When a software project grows so complicated that one person can't easily understand all of it at one time, the need for communication between team members makes it quickly mushroom way out of proportion. A small team of 3-5 people can be very productive, but anything beyond that starts getting crazy. For that matter, even if it is a one-person project, if he can't easily comprehend the total project at the same time, it gets too complex as well.
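Brooks's point can be made concrete: with n people on a team, the number of pairwise communication channels is n(n-1)/2, so coordination overhead grows quadratically even though each new person adds only linearly to the hands available. A quick sketch:

```python
def channels(n):
    # Pairwise communication channels in an n-person team,
    # per Brooks's observation in The Mythical Man-Month
    return n * (n - 1) // 2

for n in (3, 5, 10, 50):
    print(n, channels(n))
# 3 -> 3, 5 -> 10, 10 -> 45, 50 -> 1225
```

A 5-person team has 10 channels to keep in sync; a 50-person team has 1225, which is why the small-team sweet spot mentioned above holds up.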

    Scaling it up much further, how about understanding and solving humankind's problems? Does some committee need to somehow solve the big problems together, even though they have a hard time understanding the full system? Or is it maybe better if small groups of people solve very specific and limited problems, but they solve them so well that their piece becomes a fine building block for bigger solutions, which they maybe can't even imagine.

    I could say that this is maybe nature's way of dealing with complexity. A whale is very good at being a whale and plankton is very good at being plankton, but neither of them loses any sleep over pondering the complexities of the ecosystem. And yet they're integral parts of it. If they made a big committee, they probably wouldn't do as good a job at it.

    It could be a human type of hubris that we think it is our job to come up with big general solutions for complex problems we don't really understand. Rather than coming up with elegant and complete solutions to small problems that we do understand. I'm not sure. Just a philosophical thought.
    [ | 2006-11-05 20:36 | 24 comments | PermaLink ]  More >


    Wednesday, October 18, 2006

     Dawkins
    I have many friends who're great fans of Richard Dawkins. I'm not. I think he's ... well, I'll quote David Weinberger:
    I'm an agnostic, but I find Richard Dawkins an embarrassment for my side, so to speak.

    In his interview at Salon (either subscribe or watch an ad), conducted by Steve Paulson, the British biologist goes through his highly marketable outrage about religion. But, while he thinks he's arguing against all "Abrahamic" religions, he's in fact arguing against one branch of one religion. He seems to have not the slightest idea that not all religions think of faith as he characterizes it, and some "Abrahamic" religions don't really much care about faith in the first place.

    He has not done his homework. He does not recognize differences in the phenomena he's studying. He is being a crappy scientist. And he's stirring up hatred and misunderstanding...exactly what he accuses Religion of doing.

    He ought to shut up for a while and go hang out with a variety of religious folks. Field work, Richard, field work! ...
    Dawkins seems just as religious to me as those he imagines himself to oppose. A fundamentalist. Firm, unshakable belief, without bothering to ever verify anything. A bad representative for science.

    And, yes, what he argues against is just a particular subset of the subjects of religion and spirituality and concepts of supreme beings and higher intelligences. He argues against this guy with the white beard who supposedly has created the universe. Which is certainly the easiest target, like arguing against Santa Claus or the Tooth Fairy. The mistake he makes is that he lumps all the other stuff in with it, and acts like he somehow has proven that there's no higher intelligence in the universe, merely by pointing out that the guy-with-the-white-beard thing is a little silly and improbable. It is a little childish, and as far as scientific methodology is concerned a completely inane approach, and I'm surprised that so many otherwise intelligent people regard him as such a hero.

    In the interview there, Dawkins admits that science has no clue what consciousness is. Doesn't seem very scientific to then jump to the conclusion that the universe of course doesn't have any.
    [ | 2006-10-18 20:16 | 22 comments | PermaLink ]  More >


    Tuesday, July 18, 2006

     Virtual Telepathy
    Science Blog:
    Scientists at The University of Manchester have created a virtual computer world designed to test telepathic ability.

    The system, which immerses an individual in what looks like a life-size computer game, has been created as part of a joint project between The University's School of Computer Science and School of Psychological Sciences.

    Approximately 100 participants will take part in the experiment which aims to test whether telepathy exists between individuals using the system. The project will also look at how telepathic abilities may vary depending on the relationships which exist between participants.

    The test is carried out using two volunteers who could be friends, work colleagues or family. They are placed in separate rooms on different floors of the same building to eliminate any possibility of communication.

    Participants enter the virtual environment by donning a head-mounted 3D display and an electronic glove which they use to navigate their way through the computer generated world.

    Once inside participants view a random selection of computer-generated objects. These include a telephone, a football and an umbrella. The person in the first room sees one object at a time, which they are asked to concentrate on and interact with.

    The person in the other room is simultaneously presented with the same object plus three decoy objects. They are then asked to select the object they believe the other participant is trying to transmit to them.
    I hope it works.
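With one target object plus three decoys, a receiver guessing at random should score about 25%, which is the chance baseline any claimed telepathic effect would have to beat. A quick simulation of that baseline (my own illustration, not part of the Manchester setup):

```python
import random

def simulate(trials=100_000, options=4, seed=1):
    # Receiver picks one of `options` objects at random; index 0 is
    # the object the sender is concentrating on
    rng = random.Random(seed)
    hits = sum(rng.randrange(options) == 0 for _ in range(trials))
    return hits / trials

print(simulate())  # close to 0.25 by pure chance
```

So with roughly 100 participants, the interesting question is whether hit rates land meaningfully above that 25% line once the trials are tallied.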
    [ | 2006-07-18 13:25 | 11 comments | PermaLink ]  More >


    Monday, July 17, 2006

     Volapük
    I didn't know it was a real language. Volapük. The Universal Language Nobody Speaks. More from Metafilter. This is the start of the Lord's Prayer in Volapük:
    O fat obas, kel binol in süls, paisaludomöz nem ola!
    Not that I have much use for that, but that kind of shows why it maybe didn't catch on.

    In Danish, "volapyk" has become a general term for "incomprehensible nonsense". Which maybe is deserved, if you don't speak it. But it sounds like it could almost be as fun as learning Klingon, although not quite. Here's a little rundown of the origins of Volapük:
    An old German peasant once wrote to his son in America, asking for money. The U.S. postal authorities returned his letter because they couldn't decipher the address—understandably, given that the old man knew no English and didn't write German very well. He complained to his neighbor, a retired priest named Johann Martin Schleyer: now I have no money. Schleyer was sympathetic. His health was poor, and he had to support his own aging father on the small pension he received from the Church. What was needed, he decided, was a better means of international communication. So Schleyer invented one: He called it the National Alphabet, a system of 37 letters which could express the sounds of any language in the world.

    Was his neighbor grateful? All we know is that no one used the National Alphabet, that letters continued to go astray, and that Schleyer, saddened by the failure of his system, developed insomnia. One sleepless night in March 1879, he received a communication from God, instructing him not to despair, and to make a new language that everyone in the world could speak. Schleyer already knew more than 60 languages (although how well he spoke any of them, other than German, isn't clear; see "Umlauts"). In a year, he distilled his knowledge into a single, rational idiom. He called it Volapük, or "world-speech." He based its words on English roots, using a simplified phonetics that eliminated the sounds th and ch, and replaced the letter r (difficult for the Chinese) with the letter l. These changes made many of Schleyer's new words hard to recognize. You could, for example, look at the word flen for a long time and not guess that it was derived from the English friend; even if you knew that flen means friend, you would be unlikely to guess that Flent was the new word for France.

    Even so, Volapük was a vast improvement over the other universal languages available at the time. These ranged from the "philosophical language" of John Wilkins, in which each letter stood for a distinct concept, and the meaning of a word was—in theory—evident from its spelling, to Jean François Sudré's Solresol, a language based on musical scales, which, although almost impossible to speak, could be whistled or played on the trumpet. Ordinary people could both speak and understand Volapük, and many of them soon did.

    Nine years after Schleyer published his grammar, the language had a quarter of a million speakers; some accounts put this number as high as a million. Volapük primers were printed in 21 languages, and the dictionary had grown from 2782 to more than 20,000 words. At the Third Volapük International Congress, held in 1889, everyone spoke Volapük, even the porters and the waiters. There were Volapük societies from Sydney to San Francisco, at least 25 Volapük periodicals, including the Cogabled ("Jest Book"), which printed nothing but Volapük humor. The language was so popular that many people considered the question of universal communication settled once and for all. An English scholar named Alexander Ellis, in a report to the London Philological Society, concluded that "all those who desire the insubstantiation of that 'phantom of a universal language' which has flitted before so many minds, from the days of the Tower of Babel, should, I think, add their voice to the many thousands who are ready to exclaim lifom-ös Volapük, long live Volapük!"
    Well, it almost caught on, obviously. If everything had worked out a little better in the long run, I might have been writing this in Volapük today.
    [ | 2006-07-17 13:20 | 2 comments | PermaLink ]  More >


    Tuesday, May 30, 2006

     Chicken and egg problem solved
    CNN:
    It's a question that has baffled scientists, academics and pub bores through the ages: What came first, the chicken or the egg?

    Now a team made up of a geneticist, philosopher and chicken farmer claim to have found an answer. It was the egg.

    Put simply, the reason is down to the fact that genetic material does not change during an animal's life.

    Therefore the first bird that evolved into what we would call a chicken, probably in prehistoric times, must have first existed as an embryo inside an egg.
    I'm so glad we got that sorted out. Did they get paid for figuring this out? So, where did that first egg come from? It is a nonsensical and misleading question, as the chicken and the egg are part of the same system. Even without that, their logic is flawed. It is a lot more likely that some animal by some evolutionary accident will deposit some of its genetic material somewhere in such a way that it can grow into a new animal than it is that a fully programmed egg somehow happened. Eggs don't do very much on their own. Anyway, I'm already confusing myself, so I should probably have stayed with the thought that it isn't a valid question.
    [ | 2006-05-30 23:42 | 26 comments | PermaLink ]  More >


    Tuesday, May 9, 2006

     The path to the Metaverse
    10 years ago it looked to me like the net ought to be going 3D within a couple of years. You know, Cyberspace, where you could fly around in landscapes of data somehow, or virtual realities. But nothing much really happened, other than that the virtual worlds that are there have added more features. But now some people are calling the vision for the 3D web "The Metaverse" and my friends at the Acceleration Studies Foundation have organized a Metaverse Roadmap Summit. Article from CNET here.
    PALO ALTO, Calif.--With the spread of online games, virtual worlds and services like Google Earth and MySpace.com, people may soon be spending more time, communicating more and shopping more in complex 3D Web environments.

    That's why several dozen of the most influential figures in video game design, geospatial engineering, high-tech research, software development, social networking, telecommunications and other fields gathered here Friday and Saturday for the first Metaverse Roadmap Summit.

    Bottom line:

    The notion of the so-called metaverse attracted dozens of influential tech figures to conceive a path to an Internet dominated by 3D technology, social spaces and economies.

    The event, held at SRI International and produced by the Acceleration Studies Foundation (ASF), was the initial step toward what organizers and attendees alike hope will be a coherent path to the so-called metaverse--an Internet dominated by 3D technology, social spaces and economies.

    As such, the invite-only group spent the two days in a series of talks, small breakout discussions and group presentations--all in the pursuit of consensus about what the metaverse, or some would say 3D Web, will look like in 10 years.

    In the end, organizers will sift through hours of recordings of the various discussions and plan to produce a public document by the end of the summer that will lay out what they believe were the overriding conclusions and directions of the event. First, though, attendees will pore over two drafts of the document in the coming months to weigh in on the organizers' take on the so-called road map.

    Ultimately, the ASF hopes to produce regular small Metaverse Roadmap gatherings, as well as full summits at least every two years.

    In the meantime, the organizers have their work cut out for them because agreement about the metaverse of 2016 was hard to find.
    Hey, I want it, so you'd better agree on something. For that matter, I want more than three dimensions. That's already too little in the real world. So whereas the 2D desktop metaphor we've been stuck with in computers is certainly inadequate, it isn't necessarily terribly much better if it gets expanded to be 3D cities with buildings and rooms with stuff in them. I'd like at least 5D please.
    [ | 2006-05-09 23:12 | 12 comments | PermaLink ]  More >


    Saturday, April 8, 2006

     Web2.0
    "Web2.0" is one of the hot buzzwords right now. But it's a fuzzy term that a lot of people seem to dislike, because, well, it is a buzzword, and there's no wide agreement on what exactly it is, or whether it really is something new. But largely it has something to do with a new breed of websites that have more sophisticated user interfaces, particularly ones that use Ajax to update stuff on the page without having to reload it. And it has something to do with engaging large numbers of people in contributing content and in adding value to existing content. And it has something to do with web services, like RSS feeds. I.e. standardized ways one can access stuff, no matter where it comes from. And thus new possibilities open for creating "mashups", i.e. new combinations of data from various sources. For example, Flickr is a photo sharing site, and it makes it easy for you to show those pictures in all sorts of settings other than their own site. Google Maps allows you to create maps based on their data, putting your own stuff on the maps.
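The "web services" part is less mysterious than it sounds: RSS, for instance, is just XML in a fixed shape, so consuming a feed for a mashup can be a few lines of standard-library code. A minimal sketch, with the feed text inlined as a stand-in for something fetched from a URL:

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document, inlined here instead of fetched over HTTP
feed = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
# Remix step: pull out just the (title, link) pairs, which could then
# be merged with items from any other feed
items = [(i.findtext("title"), i.findtext("link"))
         for i in root.iter("item")]
print(items)
# [('First post', 'http://example.com/1'), ('Second post', 'http://example.com/2')]
```

Because every feed has the same shape, the same few lines work no matter where the data comes from, which is the whole point of the mashup idea.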

    Dion Hinchcliffe is one of the most articulate proponents of Web2.0, providing ongoing updates on his blog on where it is at. Like, see his recent State of the Web2.0. From there, a little overview of what it IS:
    For those who don't follow it all the time, it might even be hard to remember what all the pieces of Web 2.0 are (and keep in mind, these elements are often reinforcing, so Web 2.0 is definitely not a random grab bag of concepts). Even compact definitions are sometimes a little hard to stomach or conceptualize. But the one I like the best so far is Michael Platt's recent interpretation just before SPARK. Keep in mind, the shortest definition that works for me is that "Web 2.0 is made of people." However, it's so short that important details are missing and so here's a paraphrase of Platt's summary.

    Key Aspects of Web 2.0:

    - The Web and all its connected devices as one global platform of reusable services and data
    - Data consumption and remixing from all sources, particularly user generated data
    - Continuous and seamless update of software and data, often very rapidly
    - Rich and interactive user interfaces
    - Architecture of participation that encourages user contribution

    I also wrote a review of the year's best Web 2.0 explanations a while back and it goes into these elements in more detail if you want it. But there's a lot more to Web 2.0 than these high level elements would indicate. A key aspect not mentioned here, though I cover it in Sixteen Ways to Think in Web 2.0, is the importance of user ownership of data. The centrality of the user as both a source of mass attention (over a hundred million people, probably 2 or 3 times that many, are online right now) and an irreplaceable source of highly valuable data, generally encourages that the user be handed control of the data they generate. If control over their own attention data is denied them, they will just go to those who will give them that control. This gives some insight into the implications of Web 2.0 concepts, which were mostly gathered by examining prevailing trends on the Web. Forrester is calling the resulting fall out of these changes Social Computing and it'll be interesting to see what the effects of the widespread democratization of content and control will ultimately be a generation from now.
    From the comments to Dion's blog posting, it is obvious that there's a lot of disagreement. About half of them seem to think that Web2.0 is a useless buzzword that just muddles everything. But some of them are also helpful with definitions. Nathan Derksen:
    "Web 2.0 is comprised of applications that use sophisticated user interfaces, that use the Internet as an operating system, that connect people, and that encourage collaboration."
    OK, that's simple and clear. Or, to give an idea of where it came from, from Varun Mathur:
    On April 1st, 2004, Google launched GMail, which went on to ignite the whole Web 2.0 / AJAX revolution which we are witnessing right now. There is no agreed definition of Web 2.0. I like to think of it as the re-birth or second-coming of the web. The Web 2.0 websites are more like web applications, and have a rich, highly interactive and generally well designed user interface. They could also be using web services offered by other sites (for eg, Google Maps, Flickr photo web service, etc). Syndication and community are also associated with a site being Web 2.0. AJAX is the technical term which is responsible for the increased interactiveness of Web 2.0 websites. But the fundamentals remain the same - what's under the hood of a Web 2.0 application is as important as it was a few years ago.
    OK, it seems that it is part Collective Intelligence and part more lively user interfaces. It is about creating engaging, immersive websites that form open communities. Not communities as in a member forum, which is something that has existed for a number of years. But community with fewer barriers and boundaries, where one rather freely can both contribute and consume lots of stuff in real time.

    And, yes, maybe nothing very obviously new, but rather what the web was supposed to be all along. But in a more bottom-up and pragmatic kind of way. Being allowed to contribute and share more widely, and have somewhat uniform access to the contributions of many others, but without very many restrictions being imposed on you. An evolution, rather than a revolution. But it seems that collective intelligence becomes more visible and more a target, which changes things.
    [ | 2006-04-08 23:44 | 17 comments | PermaLink ]  More >


    Saturday, February 18, 2006

     'Sleeping on it' best for complex decisions
    New Scientist:
    Complex decisions are best left to your unconscious mind to work out, according to a new study, and over-thinking a problem could lead to expensive mistakes.

    The research suggests the conscious mind should be trusted only with simple decisions, such as selecting a brand of oven glove. Sleeping on a big decision, such as buying a car or house, is more likely to produce a result people remain happy with than consciously weighing up the pros and cons of the problem, the researchers say.

    Thinking hard about a complex decision that rests on multiple factors appears to bamboozle the conscious mind so that people only consider a subset of information, which they weight inappropriately, resulting in an unsatisfactory choice. In contrast, the unconscious mind appears able to ponder over all the information and produce a decision that most people remain satisfied with.
    Not a surprise, is it? But I guess it is good that science comes to terms with it. Particularly since science is largely a conscious mind activity.
    [ | 2006-02-18 14:13 | 15 comments | PermaLink ]  More >


    Tuesday, January 10, 2006

     Agora and Antigora
    Jaron Lanier: The Gory Antigora. A brilliant essay about the net. Like how we both find examples of the Agora, the ideal democratic collaborative sharing space, and what he calls the Antigora, where somebody manages to set up huge, efficient profit-making machines built upon the ownership of their proprietary core. And how we in many ways seem to need both, and one builds on the other, in ways that sometimes are rather invisible.

    He also laments how we lock ourselves into paradigms that aren't necessarily the best, but that become very stuck. You know, stuff like "files" and "desktops", and the ways we make software, which remains, as he calls it, "brittle". We still make software based on principles that mean it either works more or less 100% or it doesn't work at all. Which makes it all rather fragile, hard to change, and requiring lots of invisible unpaid work at the periphery to make it appear to be working. If you actually accounted for the work people spend in trying to keep their windows computers free of viruses, or trying to solve dumb problems with their software, it would add up to being outrageously, ridiculously expensive. Which it is. But it is still being used because a lot of people voluntarily make up for the gap between what it is supposed to do and what is actually going on.
    There is no recognition for this effort, nor is there much individual latitude in how it is to be accomplished. In an Antigora, the participants at the periphery robotically engage in an enormous and undocumented amount of mandatory drudgery to keep the Antigora going. Digital systems as we know how to make them could not exist without this social order.

    There is an important Thoreau-like question that inevitably comes up: What's the point? The common illusion that digital bits are free-standing entities, that would exist and remain functional even if there were no people around to use them, is unfortunate. It means that people are denied the epiphany that the edifice of the Net is precisely the generosity and warmth of humanity connecting with itself.

    The most technically realistic appraisal of the Internet is also the most humanistic one. The Web is neither an emergent intelligence that transcends humanity, as some (like George Dyson) have claimed, nor a lifeless industrial machine. It is a conduit of expression between people.
    And that is sort of the conclusion. It is really not about technology or economics, it is really all about culture and the playing of an infinite game.
    [ | 2006-01-10 22:55 | 20 comments | PermaLink ]  More >


    Friday, January 6, 2006

     Dangerous Ideas?
    Edge asks the Annual Question to a bunch of smart people. Last year they asked "What do you believe is true even though you cannot prove it?" Here is what I wrote about that. A great question. But, in short, it was a bit surprising how narrow-minded a lot of the answers were. This year the question is "What is your dangerous idea?" An equally great question, trying to inspire people to give their outside-the-box thinking, their most potent ideas that might change everything. At least that's how I would like to define "dangerous" in this context. Something that can upset the status quo catastrophically, but in a good and interesting way. Not all of them use it like that.

    A lot of the answers are interesting in various ways. But most of them are not very dangerous. They stay within very safe territory for scientists. And, actually, the underlying subtext is the same as last year for a lot of them. It is obvious that for a lot of these guys THE most dangerous ideas in the world are Religion, God and Consciousness. Meaning, they bend over backwards to insist that it is insane to believe in a God, and that it is a hopeless fantasy to imagine that you actually exist, as anything other than some chemical processes in a brain. And that what we really ought to accept, if we thought it through properly, is that everything is the result of unconscious evolutionary processes, we have no free will, and life is without meaning. Great.

    There's a certain kind of circular reasoning that many materialist scientists suffer from, which is similar to religious reasoning like "God exists because the Bible says so, and the Bible is true because God wrote it." But here you find it in versions like John Horgan mentions in connection with his idea "We Have No Souls":
    "In his 1994 book The Astonishing Hypothesis: The Scientific Search for the Soul, the late, great Francis Crick argued that the soul is an illusion perpetuated, like Tinkerbell, only by our belief in it."
    You know, like, "You don't really exist, you just think you do!". To some people that sounds really clever, and there's no Logic 101 that will make apparent the craziness of such argumentation. Who's the "we" who have beliefs? Who's the agency that wonders whether it exists or not? Who is it that is unsure whether it has free will or not? It is just an illusion? Just a chemical reaction in a brain? Who's concluding that? Is it just turtles all the way down?

    There are, however, some nice entries from people who don't just fall into the same circular reasoning trap. Like, Rudy Rucker, with the idea "Mind is a universally distributed quality"
    Panpsychism. Each object has a mind. Stars, hills, chairs, rocks, scraps of paper, flakes of skin, molecules — each of them possesses the same inner glow as a human, each of them has singular inner experiences and sensations.

    I'm quite comfortable with the notion that everything is a computation. But what to do about my sense that there's something numinous about my inner experience? Panpsychism represents a non-anthropocentric way out: mind is a universally distributed quality.

    Yes, the workings of a human brain are a deterministic computation that could be emulated by any universal computer. And, yes, I sense more to my mental phenomena than the rule-bound exfoliation of reactions to inputs: this residue is the inner light, the raw sensation of existence. But, no, that inner glow is not the exclusive birthright of humans, nor is it solely limited to biological organisms.

    Note that panpsychism needn't say that universe is just one mind. We can also say that each object has an individual mind. One way to visualize the distinction between the many minds and the one mind is to think of the world as a stained glass window with light shining through each pane. The world's physical structures break the undivided cosmic mind into a myriad of small minds, one in each object.
    There are some folks who actually can engage in a bit of self-criticism as scientists, and think about where scientific beliefs really come from. Like Marcelo Gleiser in "Can Science Explain Itself?":
    What if this is all bogus? What if we look at science as a narrative, a description of the world that has limitations based on its structure? The constants of Nature are the letters of the alphabet, the laws are the grammar rules and we build these descriptions through the guiding hand of the so-called scientific method. Period. To say things are this way because otherwise we wouldn't be here to ask the question is to miss the point altogether: things are this way because this is the story we humans tell based on the way we see the world and explain it.
    Or, Thomas Metzinger, in "The Forbidden Fruit Intuition":
    Is there a set of questions which are dangerous not on grounds of ideology or political correctness, but because the most obvious answers to them could ultimately make our conscious self-models disintegrate? Can one really believe in determinism without going insane?
    Some present the revolutionary idea that scientists might just need to actually catch up to what science already has established, like Carlo Rovelli in "What the physics of the 20th century says about the world might in fact be true". You know, if quantum mechanics actually were how we experienced the world to work, rather than just some bizarre math equations.

    Stephen Kosslyn goes the furthest in "A Science of the Divine", to present a way to reconcile science and religion:
    Here's an idea that many academics may find unsettling and dangerous: God exists. And here's another idea that many religious people may find unsettling and dangerous: God is not supernatural, but rather part of the natural order. Simply stating these ideas in the same breath invites them to scrape against each other, and sparks begin to fly. To avoid such conflict, Stephen Jay Gould famously argued that we should separate religion and science, treating them as distinct "magisteria." But science leads many of us to try to understand all that we encounter with a single, grand and glorious overarching framework. In this spirit, let me try to suggest one way in which the idea of a "supreme being" can fit into a scientific worldview.
    There's a surprising entry from Michael Nesmith, you know, from "The Monkees", who eloquently argues that "Existence is Non-Time, Non-Sequential, and Non-Objective", and I think I agree.

    Several people talk along the lines of Andy Clark's "The quick-thinking zombies inside us" about how most of our decision making, our "free will", really happens at a sub-conscious level, in ways we don't at all understand, so we fool ourselves concerning how much in control we are.

    Several people argue for free market economies. Get governments out of the way, and let the invisible hand of the free market sort things out.

    Which is an underlying theme here. Humans having figured out that there are complex mechanisms that make things happen. Complex mechanisms that make our decisions. Complex mechanisms that carry on the evolution of life. Complex mechanisms that make economies work. More complex than we simple humans easily can understand. More smart and efficient than any of us consciously can be.

    But at the same time we here have a number of well-respected big names in science who claim that they've understood all of that well enough to conclude decisively that these complex mechanisms that are smarter than us aren't intelligent at all. They're just simple random processes with no meaning or purpose or intelligence, that came together completely randomly for no good reason. Confused? Well, you should be. You need to become good at circular reasoning to explain that away.

    The real dangerous idea, which most people with scientific credentials apparently are afraid of thinking, is, in my words:

    Life the universe and everything is all one system, which is self-organizing, intelligent and eternal. There's no outside to it. Nothing is separate from it. Whatever happens inside of it happens because it is in its nature to happen. It has no outside meaning, but it can create meaning. Its latent qualities might or might not get expressed, but when they do, it is because they're there. So, if something finds itself having self-reflective consciousness, it is because the whole system possesses the potential quality of self-reflective consciousness. Duh. If oxygen and hydrogen mix and become water, that's because they already had the property of being able to do so. If something evolves, it is because the system knows how to evolve. If something is alive, it is because the system is alive. If somewhere in the system time and space exists, and at some "time" a scientist evolves and he decides that he has understood it all and it is all really dumb and random and meaningless and consciousness only exists in brains, except for that it doesn't really exist, well, he's right, makes no difference. It is all natural. Luckily it isn't that scientist, or some guy with a grey beard on a mountain, who's responsible for keeping the whole system working, or it really wouldn't last long. The whole system is much smarter than any brain that comes along at some point and has a short-lived fit of self-importance. Doesn't matter what you call it. You can call it God, or Universe, or Physics, or Nature, Evolution or Mind or Consciousness. It is you, buddy! If you think not, you've become a bit confused by derivatives of your own abstract thoughts. Take a step back and touch Reality. Be conscious. Be very conscious! But don't get cocky. 
That little point of self-reflective awareness that you identify with, and which is enough to spin yourself into circles, is way, way, way smaller and more ephemeral than the big you who is all of existence, all of evolving spacetime, any dimension, any physical phenomenon, any potential phenomenon, all simultaneously, all forever. It is a lot smarter at running things than your little localized conscious focal point. You're not in control. But if you catch a ride on natural law, and go with the flows, you can go far, very quickly. Because the system works really well. It is self-regenerating. It is open source.

    Well, that was my rant. But that maybe doesn't give you anything very practical to do with it. A truly dangerous worldchanging idea would be a meme you let loose, and it just breaks down the old fixed structures, and it guides the self-organization of something new and better. They don't come along all that often, but when they do, it doesn't really matter much what you think about it, as it pretty much happens by itself. It might be time for some ideas that actually change how we perceive ourselves and the world, where nothing will be the same again.

    Lots of people have commented on the Edge dangerous ideas thing. Like, I just noticed Dave Pollard's Blinded by Science. He wasn't very impressed either with the dangerousness of those ideas, and he has some alternative suggestions.
    [ | 2006-01-06 00:49 | 31 comments | PermaLink ]  More >


    Tuesday, June 28, 2005

     Time Travel
    Article last week: New model 'permits time travel'. Researchers are trying to find models for time travel that avoid the good old paradox, of what would happen if you went back in time and stopped your grandfather from meeting your grandmother, or you killed your dad before you were conceived. Which would make it sort of impossible that you're still there, then. So, it has been used to sort of prove that time travel is impossible. Now, these new thoughts don't sound terribly much better. Even scientists have a hard time wrapping their mind around time. So, let me help them out a little bit. Here's how I think it works.

    Time is a dimension, just like the 3 dimensions of space. There probably are more dimensions. String theory seems to predict something like 10 or 11 dimensions. So there's most likely more directions to move in than the 4 of spacetime. But there certainly are those.

    Despite time being just another dimension, we seem to be wired to perceive it as a one-directional stream. Something happens, and then it gets frozen in the past, as the history that leads to this point. And the future hasn't happened yet. So, both the past and the future seem equally inaccessible to us. But that's really just the fault of our perceptual wiring. Doesn't actually have much to do with Reality with a big R, the stuff that's really there. It is just the reality that we construct from our perceptions, and our abstract conclusions about those perceptions, which keeps time in that format.

    However, we can learn something about time as a dimension by studying the dimensions of space, which we perceive ourselves moving more freely in.

    Let's say I'm standing on the town hall square, and then I walk down to the train station. I can remember that I was at the town hall square. Now, if I walk back to the town hall square, will I catch myself standing there? No, of course not. Not even if I do it really quickly. That's logical for us in space. But really it isn't all that different with time.

    Our lives are objects in 4 dimensions, at least. Even if you take just your body. It has length, width, breadth, and it has an extension in time. It extends backwards in time from where you are now, and forward in time to its death. All of that sort of sticks together. It might change in various ways, but it does have a certain coherence.

    Let's take a mobile item with some spatial size, like a bus. It might be 15 meters long, but it sticks together. If the front end moves, the back end moves with it. Whatever influences the front has an influence on the back, and vice versa. It won't be entirely the same, but it will be connected. If the front end stands in the sun, the back end will get warmer too, even if it doesn't. If somebody puts a sign on the front, saying that now this is a school bus, then the meaning of the back will suddenly change too, even if nothing else changed about it. Your life is kind of like that too.

    Good actors will prepare for much more than the role they're asked to play. They will sit and write down what they think the past history is of this character. They will construct events in his or her life that might have made him what he is. They will construct emotions from past experiences. They will decide where he's going, what he wants to do, and why. And that creates a much more full character in the present.

    The past is where you're coming from - the path and experiences that add up to who you are now. The future is where you're headed - the path and the experiences you'll go through if you continue as you are now, and with the history you've had. In principle both that past and that future are changeable. If you suddenly change, you'll need a different past history to explain it, and you'll suddenly be pointing towards a different future.

    But it is difficult to suddenly change, because there's a lot of inertia in all this stuff. There's a considerable weight in your past history. It adds up quite convincingly to explain who you are now. The most likely way of changing it is to re-interpret it, and get a different meaning out of it. But it would also just have to change, if you really change.

    All of it is quantum probabilities, so it is really a lot more moldable than what a 10-ton bus seems like to us. It seems so solid. But really it is all the holodeck. It can be whatever you want it to be. We just have so much invested in all the stuff we've observed and what we've concluded it is, so it is very hard to just change it arbitrarily, and observe something else. But there's nothing that makes it impossible.

    Anyway, back to time travel. My life is like this bus that is moving. It sticks together, but it sometimes changes. The lives of everybody connected to me are also all intertwined.

    So, let's say I go back in time with the intention to meet my dad, and stop him from meeting my mom. But the problem is that they're part of the back end of my bus. If I move, they move with me. They're not standing back at the bus depot any more, because the bus moved on. Just like I'm not standing back at the town hall square when I go and check. Because I moved. Duh.

    In part because of our language, we often make the mistake of assuming that a place is the same, even as time changes, or other things change. You know, this is my office, and I can talk about my office yesterday, or last year, and in my mind I tend to think it is the same office. I believe it is the Hopi that have a language that doesn't do that. My office yesterday is not "here", it is "over there". Over there in yesterday. And they probably have a point. It isn't the same office. It looks a good deal like it, but it is different, the time is different, and a lot of the sub-atomic particles are probably different. We short-circuit our logic when we fall for the misconception that places and times are "the same" when they really aren't.

    So, if I go back to where my father met my mother. Well, we could say that they no longer are there, because they moved on, got married, got me, etc. So, if there's only one of each of them, they're no longer there. The bus left. If you wanted to change things, you'd have to catch up with the bus where it is.

    Or, are they still there? See, that gets us into the more fantastic subject of parallel realities and multiple versions of ourselves. The funny thing is that most people intuitively would expect that if time travel were possible, they would go back and find exactly what "was there". I.e. your dad and mum exactly as they were, ready to have their first kiss at the drive-in, playing "Rebel Without a Cause". But that requires that we keep existing at all times and places we've been. That there are a zillion you's stretching far back, and probably forward.

    That's entirely possible, that it works like that. That the past isn't just a memory in your brain, but it is real, it is there, it is alive.

    So, let's say they still are there. They're, however, also still connected with you, as part of a probable past that led to you being born. And, incidentally, to you getting around to go back in time at some stage in your life. It is all connected. Not two separate events, you and them. Rather, part of one bigger spacetime event.

    One possibility is that you might find that the past event has already changed from what you thought it was. The later events in their and your lives might have redefined what really happened. Maybe the original event was a happy kiss in the drive-in. But later on they got divorced and got therapy and decided it all was different. Like, he was really a brute who raped her. So maybe that's what you'd see if you went back. Because the whole gestalt is connected backwards and forwards.

    But then we have to touch on the possibility of what happens if the past changes.


    We're talking about objects in 4 or more dimensions. A bit like the screenplay of a movie. It is all connected, the characters, the timeline, the events, the climax. If we change one part, we'd have to change others. If we rewrite the script a little bit, and make the main characters meet in Paris rather than in Rome, then a bunch of things will change. Their kids will speak French rather than Italian, etc. If you had already shot the movie in the first version, then the next version will be a different movie, even if the story is similar on many points, and some of the characters are roughly the same, and even if it has the same name.

    Quantum physics seems to say that things exist if somebody has perceived them in a particular way. If they haven't been perceived, it is uncertain what is there. It could be many things, but the reality hasn't been frozen. So, if nobody was there to hear it, we don't know if the tree fell in the forest and made a sound. We don't know if Schroedinger's cat is alive or dead, unless we look.

    So, my past is a certain way because I perceive it to be so. Not really that I *have* perceived it that way. More that I'm doing it now. If I stop perceiving it, it might go back to uncertainty. Or if I succeed in perceiving it differently, it becomes something different.

    You'll notice that science skips rather quickly over the mystery of who it is who perceives things, as consciousness isn't a popular subject for people in the material sciences. So they usually just talk about "measuring", rather than awareness or consciousness. But it is unavoidable. Things are real when there's somebody there who perceives them. It would all become a little more logical if we accepted that there were such a thing as consciousness, and that it probably is extremely basic to how the universe works.

    Now, what if you get to a fork in the road in your life? You might go left, you might go right. Your life develops differently, depending on which one you pick. Before you make the choice, the two possible futures are maybe equally probable. Once you make the choice, your future falls into a certain groove, and that choice becomes part of your past that explains how you got to where you are.

    But how about if you also took the other road from the fork? No, not the you who's here today, who went left. But there's maybe a parallel aspect of you who took the path on the right, and went on from there. There might or might not be. Depends on whether anybody's there to experience it. If there were a consciousness which found the right side path interesting, it might have turned from the realm of uncertainty into a reality.

    In your life there have been many forks in many roads. Possibly many or all of those turned into realities. One of which is the one you're sitting in now. They're related, at least by their common branching points, but they're otherwise different. The Flemming who decided to move to Rio and be a street performer would clearly be a different character than me, even though we might have a lot in common, and part of our history would be exactly the same.

    So, back to those time paradoxes. If you try to go back and meet your mom and dad, before they conceived of you, then several things might happen. If you focus on the path that is part of your past, you'll probably find difficulty in trying to interfere. That's not as mysterious as it might sound. It is a bit like trying to spin around to see oneself from the back. No matter how fast I do it, I'll miss, because my back moves at the same time as the front. So, the same with the characters in your past. You might be surprised to find that they just walked out before you walked in. Because they're connected with you.

    The difficulty of doing so depends a bit on how long the tail is. You can't see yourself from the back by spinning around. But if you were a snake, you could grab your own tail, because there's enough dimension there. The front of the bus can't see the back, but a little train with many wagons maybe could. So, in time travel, you might have trouble getting to meet a recent version of yourself. But if you go far enough back, it isn't so hard. So, maybe it is not any great problem to go back and meet your parents.

    But remember that there's a considerable inertia towards them doing roughly what they did, because there's already a well-perceived future reality in front of them, which they're connected with. There's a certain pull in that.

    But maybe if you exert enough energy, you might make something else happen. You manage to throw a Molotov cocktail into their car, and they burn to death. Then what happens? Really just that you created a fork. Your parents would still have met in that drive-in, and you don't suddenly disappear. But there's now another timestream which develops differently. It will include a few hundred people for whom the mysterious tragedy in the drive-in in '62 happened, with the stranger who blew up a young couple, who claimed he was from the future, and was carrying strange electronic devices in his pocket. And that reality develops differently from the one you knew, in small or big ways. Might be huge, with time travel technology being then discovered in 1962, from somebody studying what you had in your pockets, or it might just be a minor ripple, and it becomes a world very similar to ours, just without your parents, but with you as an older criminal. And back here in our reality, we'll wonder why you never came back, but everything will otherwise continue as normal.

    So, there is no paradox. There are just potential paths, and actual paths taken. Sometimes multiple parallel paths might be taken. Sometimes you might go back and take a different path. None of which changes that all of the paths, that somebody perceived themselves taking, were real.

    Back to one little detail. That article mentioned that of course we aren't seeing any people who suddenly disappear because their past changed, so obviously that isn't happening. I wouldn't be so fast on that. OK, forks might happen in the past that lead to different realities. But this reality might also change. It is just that everything in it is connected and relatively consistent with each other. Remember, it is a 4 or more dimensional "object". So, if somebody succeeds in changing something about it, everything connected to it changes. Like, the whole history of why things are the way they are, and everybody's memories of what happened, which explains things. And that is probably hard, because there's a lot of inertia, and a whole lot of things to change. But probably possible, if enough energy is exerted. So if your neighbor across the street got written out of the story, it wouldn't be that his house suddenly, poof, evaporates, and everybody stands there wondering what happened to him. No, you'd be quite sure that there had never been a house on that lot, and you'd have no recollection of anybody with that name, and it would all be very logical, and everybody could confirm each other's stories. I'd say that stuff like that probably happens once in a while, but you most likely wouldn't have noticed, except for maybe an odd feeling that something was off, which you couldn't put your finger on. If you rely only on your perceptions and your memories, nothing would reveal that anything changed, because they would have changed at the same time.

    I have little doubt that we'll figure it all out eventually, and that there's somebody somewhere who already can travel freely in time and space. See, it doesn't matter if it takes a million years to develop the technology, because that's just "right over there" in spacetime. And it is probably unavoidable that the spacetime multiverse becomes one big subway system, if it isn't already.
    [ | 2005-06-28 17:32 | 31 comments | PermaLink ]  More >


    Thursday, June 9, 2005

     Corporate Fallout Detector
    James Patten:
    The Corporate Fallout Detector scans barcodes off of consumer products, and makes a clicking noise based on the environmental or ethical record (selectable via the "sensitivity" switch) of the manufacturer. It explores issues of corporate accountability and individual choice. Due to increasingly complex global supply chains, a single product we buy may contain parts made by various companies all over the world. We may agree with the business practices of some of these companies, while not with others. The complexity of the relationships between manufacturers can be so great that it becomes unclear how to translate our personal convictions into good buying decisions, and all purchasing decisions involve an unavoidable element of risk. For example, a consumer may know that one company has a good record on human rights and pollution, but that company may be owned by another company that has a poor record in these areas. When one buys from the smaller company, the parent company also benefits. In this case, what should a consumer do to reward good business ethics? One can argue either for buying or boycotting products from the smaller company.

    The maze of corporate ownership makes it difficult for consumers to reward good business practices or punish bad ones by changing their buying habits. The products on the shelves in a store look more or less the same whether they were manufactured using child labor, or they increase pollution, etc. These aspects of products are invisible and difficult to understand. In this sense, these aspects are like nuclear radiation (invisible, dangerous, complex), which is part of the reason I designed the Corporate Fallout Detector to look and sound like a Geiger counter.

    Yeah, it isn't simple. Would be nice if it was as simple as a red light or green light. But really the thing is that we need to be better informed. We need better ways of visualizing complexity, so that we'll bother paying attention to it.
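    The ownership-maze problem described above can be sketched in code. This is a minimal, hypothetical illustration, not the detector's actual implementation: the company names, ratings, and the "worst rating up the chain" rule are all invented here to make the structure of the problem concrete.

    ```python
    # Toy ownership graph: subsidiary -> parent (None = independent).
    # All names and numbers are invented for illustration.
    OWNERSHIP = {
        "SmallGreenCo": "MegaCorp",
        "MegaCorp": None,
        "CleanSnacks": None,
    }

    # Toy ethics ratings on a 0 (bad) to 10 (good) scale.
    RATINGS = {
        "SmallGreenCo": 9,
        "MegaCorp": 2,
        "CleanSnacks": 8,
    }

    def effective_rating(company):
        """Walk up the ownership chain and return the lowest rating
        found, on the view that buying from a subsidiary also
        rewards its parent."""
        worst = RATINGS[company]
        parent = OWNERSHIP.get(company)
        while parent is not None:
            worst = min(worst, RATINGS[parent])
            parent = OWNERSHIP.get(parent)
        return worst

    print(effective_rating("SmallGreenCo"))  # parent drags it down to 2
    print(effective_rating("CleanSnacks"))   # independent, stays at 8
    ```

    Even this toy version shows why a simple red/green light is hard: the "worst rating wins" policy is just one defensible choice, and a real tool would have to make such judgment calls explicit.
    
    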

    I don't know how real that Geiger counter thing is. It is a couple of years old and has been displayed at some festivals and gotten some mentions, but it is probably more of an art project, meant to call attention to the issue.
    [ | 2005-06-09 01:49 | 24 comments | PermaLink ]  More >


    Thursday, April 14, 2005

     Negative or Paradoxical Strategies
    Antony Judge writes a paper about Liberating Provocations. You know, the "rational" approach, if you want somebody to do something that is good for them, has usually been to positively promote constructive behavior. I.e. tell them why it is good for them, outline all the advantages, provide useful information, encourage them. It is just that a lot of the time that doesn't work at all. Lots of people do the opposite of what they're supposed to. So, one could go a different way altogether and do the opposite. Promote the negative behaviors. Act surreal and start a campaign for doing all the wrong things. Get the government to support them loudly.
    This is a two-pronged strategy. By advocating a "negative" approach, those resistant to being told how to behave would reactively consider a "positive" approach. Those scandalised by the "negative" approach, would invest their energy in "positive" campaigns -- where previously they would not have been engaged.

    We are all familiar, from earliest childhood, with the response to exhortation from those occupying the moral high ground. We either ignore them or consider interesting ways of doing the opposite. If we are told not to do something, then we consider doing it. If we are encouraged to do something, we consider doing the opposite. The point is made by Zoe Williams (Cannabis Comedown, The Guardian, 29 March 2005):

    "Thus, if you tell them things are dangerous, they will do them, and if you shrug and say "actually, it doesn't seem to do too much harm", they will do something else. Whole swaths of aberrant behaviour could be addressed with this in mind. Obesity, smoking, drinking, fighting, snowboarding and joyriding would all become terribly passé if the government were to become their advocates, particularly if prominent members of the government were to lead by example and take up dangerous activities in a high-profile way."

    This provocative approach is designed to communicate more effectively with those already acting inappropriately or those who are passive in the face of inappropriate action.
    Now, I'm not even sure if I want to buy the idea that we collectively want to make people do a certain list of good things and not do a certain list of others. Although a society of course needs some kind of list of things one ought not to do. I'd want it to be very, very small, though.

    What I'm more interested in, which Tony also brought up, is the angle of infinite game playing. In a finite game there's a set of rules and you're supposed to follow those rules to win, against some kind of opposition. In an infinite game, however, you play with the boundaries and you change the rules, in order to keep playing. A very different thing.

    Fixed rules about what you're supposed to do and not do will create a finite game. Obviously. It constrains people. And for it to be a game, different people will tend to take different sides. If some people make a finite game with the goal of making you not smoke, not use bad words, not watch porn on your computer, or whatever, well, that's a pretty dull game. The only way of making it half interesting is to play the opposing part. I don't know about you, but negative campaigns trying to tell me what to do or not to do give me an instant compulsion to disobey. I don't always bother to follow it, but such a campaign is obviously doing the exact opposite of what it tries to do.

    OK, so a fixed game of compulsion or repression will quite naturally and automatically motivate a lot of people to do their best to do something else. It suddenly becomes important, and somewhat interesting. The opposite-game is limited too, but not quite as limited as doing what you're told.

    Limited games tend to make people do things they wouldn't do otherwise. Maybe they do what they're supposed to and follow the rules, or maybe they do what they're not supposed to, specifically disobeying the rules. Either way, they might not bother if those particular rules weren't there.

    Unnecessarily limited rules can be harmful. I'd say that anti-smoking campaigns are probably one of the biggest killers in our society, probably responsible for millions of unnecessary deaths and many more millions living miserable neurotic lives. Because they present a very limited game. Either you do what we say or you die. Not much fun in it either way. There's hardly even two factions in it.

    Having a choice is fun. And if you feel free to make your own choices, changing the rules as you go along, you're probably playing an infinite game. The playing of infinite games defuses the power of a finite game. Which was an illusion in the first place, but one might not notice before one changes the rules.

    Carrying out unexpected paradoxical strategies might work, not just because people will do the opposite of what they're told, but because they give a hint of the joys of freedom of choice. It shows you that you don't have to do what you're told. You're free to not smoke, regardless of whether the government unwittingly spends a lot of effort on compelling you to do so, or not to do so. Which is roughly the same thing.

    The thing is that most people are quite capable of choosing the best option that is available, or a new option that previously wasn't available, IF they're not being held stuck in some kind of fixed for-or-against situation. Not surprisingly, most people will choose what they feel good about, if they have the choice. Or, rather, if they have ALL the choices. Because there are a lot more choices than two in life.

    That all seems very paradoxical to people who try to rule other people and condition them to do the right thing. That people are more likely to do the right thing if you don't force them, but rather allow them to move the rules around. And, for that matter, you have no business thinking you know what the right thing is for everybody. What people want is to have fun playing the game of life, and playing it as long and as well as possible, and they probably don't really want your stupid little game of following a rule that's known in advance.

    Oh, I probably went off on a tangent. Tony's article is superb and gives lots of good examples of provocative and surreal and perverse strategies and pranks that have worked well. Some very amusing ones, like the Cannibal Flesh Donor program, pornocracy, horses running for public office, etc. Humor is great, because it breaks the rules, at least a little bit. It makes people pause and see things a little differently. And that is what is needed. Not being for or against. Life is too short and too big to only use it for playing two-bit games. We need to keep evolving, in millions of different directions at the same time, if we are to have a chance at all. Good paradoxes have much more generative power than clearly stated goals that are handed to you.
    [ | 2005-04-14 23:59 | 23 comments | PermaLink ]  More >


    Tuesday, February 15, 2005

     In the mind of an autistic savant
    From the Guardian:
    Daniel Tammet is an autistic savant. He can perform mind-boggling mathematical calculations at breakneck speeds. But unlike other savants, who can perform similar feats, Tammet can describe how he does it. He speaks seven languages and is even devising his own language. Now scientists are asking whether his exceptional abilities are the key to unlock the secrets of autism....

    Tammet is calculating 377 multiplied by 795. Actually, he isn't "calculating": there is nothing conscious about what he is doing. He arrives at the answer instantly. Since his epileptic fit, he has been able to see numbers as shapes, colours and textures. The number two, for instance, is a motion, and five is a clap of thunder. "When I multiply numbers together, I see two shapes. The image starts to change and evolve, and a third shape emerges. That's the answer. It's mental imagery. It's like maths without having to think."...

    Last year Tammet broke the European record for recalling pi, the mathematical constant, to the furthest decimal point. He found it easy, he says, because he didn't even have to "think". To him, pi isn't an abstract set of digits; it's a visual story, a film projected in front of his eyes. He learnt the number forwards and backwards and, last year, spent five hours recalling it in front of an adjudicator. He wanted to prove a point. "I memorised pi to 22,514 decimal places, and I am technically disabled. I just wanted to show people that disability needn't get in the way."

    Yeah, you're doing that alright. The mind is an amazing thing. Of course, what savants can do indicates that it ought to be possible for anybody, if you knew how. Unfortunately, however amazing it is that he describes what goes on as he does it, that still doesn't make it teachable. Because he doesn't use any clever formulas or anything. But he obviously uses a kind of synesthesia that works, without much effort.
    [ | 2005-02-15 15:50 | 13 comments | PermaLink ]  More >


    Friday, February 4, 2005

     Tagwebs, Flickr, and the Human Brain
    Jakob Lodwick has an epiphany on tagging and Flickr and how the human brain works. OK, I'm not sure it really says anything new, but he explains, for dummies, what it is to tag your pictures, and why that's a really good thing, which just might tweak greater intelligence out of the net.

    Tagging is basically just that you can assign a category or keyword of some kind to some piece of data, like a picture. That is an example of metadata. That is, it isn't the thing itself, but it is something you say about it. Or which somebody else comes along later and says about it. And the cool thing is that if that is done in a reasonably standard way, all sorts of software and search engines can come along later and show a lot of previously hard-to-find connections, and they can group things together for you that have similar tags.

    That would be the Semantic Web. I.e. that instead of just a bunch of free-form text and pictures on millions of webpages, we tag things in more fine-grained detail as to what it is. This is a name, this is a country, this is a movie, this is a quote, etc. If that were done with all the data on the net, amazing new things would be possible. But the trouble is that it is a lot of work, and not really much fun, to go over existing texts and add a lot of tags saying what it is. And then the trouble is how we agree on what the proper category structure is. If you call it "city" and I call it "town" and French people call it "ville", how can we group it together well enough? Those are hard problems that aren't sufficiently solved. In part because human language is fuzzy, and we all have different mind maps of how things should be organized in a perfect world. So, the semantic web hasn't really happened, and any examples of it tend to be kind of pathetic and not really useful.

    So, the tag thing, even though it is the same idea, sort of relaxes the tension and opens it up for instant use. I.e. you don't worry about the perfect ontology of categories. You just tag things you care about, with whatever tags make sense for you. And smart programs come along later and try to make useful applications based on the tags they find.
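    The basic move those smart programs make can be sketched in a few lines: take free-form tags attached to items and invert them into groups. This is a minimal sketch of the idea, with invented photo names and tags, not how Flickr itself works internally.

    ```python
    from collections import defaultdict

    # Each item gets whatever free-form tags its owner felt like using.
    # Photo names and tags are invented for illustration.
    photos = {
        "img001": {"paris", "night", "bridge"},
        "img002": {"paris", "cafe"},
        "img003": {"bridge", "sunset"},
    }

    # Invert item -> tags into tag -> items. This inverted index is
    # the whole trick behind "show me everything tagged 'paris'".
    by_tag = defaultdict(set)
    for photo, tags in photos.items():
        for tag in tags:
            by_tag[tag].add(photo)

    print(sorted(by_tag["paris"]))   # ['img001', 'img002']
    print(sorted(by_tag["bridge"]))  # ['img001', 'img003']
    ```

    Note that nobody had to agree on an ontology up front; the grouping falls out of whatever tags people happened to use, which is exactly the relaxation that made tagging take off where formal semantic markup stalled.
    
    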

    Hm, I've gotta make some of those.
    [ | 2005-02-04 16:15 | 25 comments | PermaLink ]  More >


