Keys, wallet, smartphone … things we can’t leave home without

“Students with smart phones study more often” according to this article by Study Blue. Amazingly, it reveals students are using smart phones to study away from a desk at home or in a classroom. This seems obvious, but it fits with my downtime learner theory, in that people are learning in all sorts of places and times that traditionally fall outside of the ‘cells and bells’ notion of where learning takes place. It’s hardly earth-moving, but at the same time it hints that study is interrupted by personal use – as though learning is one activity and life is another.

Wherever you go in a city like Sydney, people appear to be ‘on the phone’ but not holding it to their ear. Finger swiping and tapping is predicated on learning a new language, new literacies and navigating through new types of media. Using one effectively is a world away from using a computer – and no one’s suggesting people need formal training.

A recent Educause report looked at the potential of smartphones. However, if you scan the questions and bar charts, the assumption here appears to be that learning-content will in some way be shovelled into a course-app, rather than the course adopting mobile-apps that pre-exist. I can live with Study Blue’s determination that students are often checking email in their study – as that is still the most likely way an institution is going to connect with them. It’s the Henry Ford option – any colour you like as long as it’s black.

The problem seems to be the assumption that ‘software’ useful for academic purposes is either an LMS or some form of Microsoft Office generated document, rather than something like Evernote or Edmodo. And why not – there’s a vast amount of literature about such things and millions of dollars and hours invested in them. A recent study by Monash found students preferred to communicate with lecturers via email (64%) of the options given, yet when it comes to Internet use, only 7% of their activity was email. It also framed the use of social media (Facebook, MySpace [huh?]) or blogs and wikis (which are a different category) as something done on either a Mac or a PC.

Where is the correlation between smartphone access and computer access? 41% of students in the sample said they used multiple devices. I wonder what percentage of course content was designed for multiple devices and tuned into the way people on smart phones use them? I suspect a tiny amount, if any. This, to me, highlights the fact that study materials are overwhelmingly designed for print format (A4) and, to a lesser extent, laptop/desktop browsing. What about smart-phone applications or connectivist approaches to student-teacher culture participation? Why, after all we know, is it okay to preference PDFs, PPTs and Word Docs rather than find ways to allow teachers to develop this content themselves, instead of buying it from Pearson?

Despite the clear changes in technology and the social use of that technology, the paper-oriented cognitive apprenticeship is still king. Content is treated as if it is scarce enough that only a handful of people can hand it out, focused on print rather than better ways to tell a story, show people something new (e.g. the ShowMe app) or provide networked conversations – leading and facilitating through the craft of being a teacher first and foremost. It also assumes that what is said on paper, or in the classroom, is correct. There are plenty of studies to show that isn’t a universal truth.

The cost of developing ‘course-apps’ would be an extravagance and would fly in the face of what people have learned to do with smart phones. Can you go into a shop and say ‘custom fit me a suite of apps please’? – No, because that isn’t how smart phones work and everyone knows it. Developing course-apps would be like moving ten years backwards, to a time when only a few instructional designers had the skills to make app-courseware. It ignores the fact that learning is now everywhere, in entirely new formats, designs and experiences, despite plenty of resources and plans to help implement mobile learning right now. All content takes time to create, and I’m yet to meet a teacher who has time to develop what they see as the wrong type of content.

Downtime learners have figured out how to create multiple useful processes to help them do an extraordinary number of things with smart phones. Very recently, the Independent Magazine reported that the mobile is the remote control of your life; for most people now, leaving home without it is as unthinkable as leaving your keys or wallet behind – imagine what it will do by 2025.

I think we’re past the point of wondering what smart phones are or how we can shovel courseware into apps. There’s a broad suspicion that eLearning hasn’t been as awesome on the PC as we were promised. Downtime learners don’t expect (or want) a single-option solution, or someone to explain it (training). They expect to be self-supporting via the network of people it connects them to. Web2.0 was a mere twitch, not even a tremor, for education – because it largely happened on a desktop (and was easy to regulate or ban). Learners were reliant on the blue cable or wifi – and education was adept at controlling that experience.

The smartphone was the real earthquake in terms of radically rethinking software and how we use it. Even more significantly, we can easily tether our laptops and slabs to our own networks and bypass the often unreliable and locked-down institutional networks entirely. Education seems to have totally missed this – or perhaps sees it as something fundamentally further out and of little significance. A phone is a computer and a computer is a phone, but the way we interact with them is fundamentally different. We are now in a position to create a personal technology plan for students. If we are, as people claim, the ‘student centred learning’ generation, then we have to also accept that smart phone centred learning is the preference of millions of people who learn constantly from each other through apps and mobile Internet access – but equally that smart phones and personal Internet connectivity are a luxury many don’t have, just like laptops. The question seems to be which they are more likely to have, and which will become the predominant device. I seriously doubt it will be ground fibre and wifi to laptops, and neither does Cisco.

The mobile-only Internet population will grow 56-fold, from 14 million at the end of 2010 to 788 million by the end of 2015.
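As a back-of-the-envelope sanity check on those figures (the implied annual growth rate is my own arithmetic, not Cisco’s):

```python
# Check the projection quoted above: 14 million mobile-only users
# at the end of 2010, 788 million at the end of 2015.

start_users = 14e6
end_users = 788e6
years = 5

fold = end_users / start_users      # overall growth multiple
annual = fold ** (1 / years)        # implied year-on-year multiple

print(round(fold))                  # 56 — matches the "56-fold" claim
print(round((annual - 1) * 100))    # 124 — roughly 124% compound annual growth
```

In other words, the quote isn’t just a big headline number: it implies the mobile-only audience more than doubling every year for five years.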

The trend is this: the current use of smart phones and other mobile devices provides experiences and usage patterns that didn’t exist a few years ago. They, like their users, are not interested in retro-fitting, but in working to well-agreed growth rates, especially in video, and the fact that people expect to use multiple devices. Take a conference today – people take keys, wallet, iPhone, iPad and laptop. Why? If they all did the same thing, there would be no point. The reality is that they do different things, and the media on them is different. There won’t be an optimal point of convergence, just more fragmentation, more choice and more expectation. What is clear from industry reports is that video will be the most consumed media – not print or slide-decks – co-created and distributed socially. But you know this, right? It was predicted in the 2008 Horizon Report – the same year we all got iPhones outside the US – and this year, mobiles are once again top of the list.

So what are you going to design for next year if not mobiles and video distribution over networks?

The internet is @@@@ing #####less

So I sat and wondered today, as I watched another PowerPoint, how different things have become when it comes to even talking about the places we communicate in. It’s non-cool these days to use http:// or even double-u double-u double-u dot. Even email has become somewhat of a time stamp. Some people only have their work email address, and others have hotmail or yahoo. When we interact offline, our use of ‘place’ tells others something about us, even though we might not be aware of it. We may appear a dinosaur or a space-cadet, it seems.

The @ symbol has changed its meaning, and I’m sure I’m not the only one who’s flicked someone my Twitter name and then been asked – is that a dot com? The point of my drawing is to show that even simple symbols have radically changed their meaning and, like many symbolic languages, both the @ and the # have many meanings depending on audience, media and intention.

So it appears to me that establishing a #hashtag today is as important as a domain – and reflects the increasing simplicity with which we can mark out a point of reference online. A domain once represented a website – a repository of information – whereas the #hashtag represents a community, and can be owned by everyone, not just the registrant. It is logical to me that organisations interactively manage a taxonomy of #hashtags and @accounts, and that these things have a valid place even on printed stationery. But in doing that, they also need to ensure that they raise their level of communication to meet the expectation of the audience. It’s only going to get more complex, I guess.

Knowing isn’t the same as knowledge

Larry Johnson at the New Media Consortium (NMC) and Horizon Report has said more than once, when asked about Second Life, that he has always seen it as a ‘now’ technology, and in doing so is conscious that whatever they do there needs to be transferable to some other technology that contains new opportunities. By this he means the objects are less important than what they ‘know’ about virtual worlds; they can afford to leave objects behind if the potential ‘knowing’ is greater. I call this holding your tools lightly — which is still problematic with teachers, who like to invest in heavy tools that they are reluctant to change. You can’t blunder your way into making a virtual world successful if you are reliant on knowledge as facts, or strategy as process – but as history has shown, millions of dollars can be wasted searching for the grail.

Anyone who has been involved in virtual worlds seriously will tell you that people bring with them knowing and knowledge — ideas, facts, experiences and skills from other domains. While anyone with a modest amount of technical knowledge can set up a Minecraft server and set about getting kids to do tasks that can be paraded as educational, to me that misses the bigger opportunity. By this I mean not stamping ‘education’ on it. Virtual worlds have important differences from other digital environments. John Seely Brown said “While the architecture of these worlds is distributed across the Internet, the activities within these virtual worlds create a sense of shared space and co-presence which make real-time coordination and interaction not only possible, but a necessary part of the world.” This is the sense of ‘being there’ and, most significantly, choosing to be there with others, not being forced. An open world such as Minecraft is, as JSB states, “culturally imagined and the practices of the participants, their actions, conversations, movements, and exchanges, come to define the world and continually infuse it with new meanings”. If we look at Huizinga (the father of game research in many ways), he said that culture is the manifestation of play, not the reverse. So learning to play is not the same as playing to learn.

What Massively Minecraft is interested in is this idea of ‘living in shared practice’ and, in providing that, liberating players from the experiences of everyday learning as ‘students’ – most obviously by re-defining their role in the world. We believe that doing this allows kids and parents to develop new practices through imagination and networked, collective action. We are interested in kids and parents ‘knowing’, rather than in that being used to create knowledge, both inside and outside the game. We are already seeing this, even in our youngest players. Rather than waste time defending it, my response is to ask doubters – how are you creating better ‘knowing digital communities’ and, most specifically, if you are focused on knowledge (content), how do you know that what you do with technology around the edge of that is not just a novelty?

This is why Massively Minecraft isn’t about getting the game into a classroom per se; it’s about what kids and parents bring (or not) to the game-world, and what they take from it through collaboration, shared meaning and collective action. One parent told me that they went to see their kid’s teacher and challenged the results they had been given, saying “clearly my child is capable of more than this, what are you doing that gives you this information as fact – as I can see this isn’t right”. Massively Minecraft isn’t just playing, it’s about creating roles for players in and out of the game that perhaps they didn’t expect – and I see this growing every day, and it is a good thing. To me, playing Minecraft with teachers is a very useful experience for them … it promotes knowing in new ways.

The wisdom of flowers

James Surowiecki’s The Wisdom of Crowds argues that across a large and diverse group, the average response will be better and smarter than individual experts. However, researcher Andrew King commented “We are bombarded by other people’s judgments – from our friends, colleagues, and the media. Such a flood of information can result in a convergence of opinion, creating overconfidence (and inaccuracy). What our work demonstrates is that for accurate collective decisions, you either aggregate completely independent opinions, or copy successful individuals; anything in-between seems doomed to failure.”

I’ve been thinking about this, and the fact that ultimately it is individuals that teach, and for the most part they want to feel that the decisions they make are in the best interests of those being taught – their family and their community. Just because it’s popular doesn’t mean it’s more than that. There are plenty of reasons to crowd-source and plenty more to argue why it’s a dumb idea. Two examples of the latter – art isn’t made by committee; great design isn’t made by consensus. They can be influenced, but at some point something is made by someone – the individual.

I was thinking about Steve Collis’ metaphor of an airline time-lapse as ‘thoughts’ moving around the world as he asked “What is the word thinking?”.

I decided, after 4 days of pondering, that aircraft are too mechanical, operate within controlled parameters and are rather predictable (or at least you’d hope so). I liked the idea though, but think that all this TeachMeet, unConferencing is more organic – more like flowers. Many flowers are designed to attract pollinators, and are the product of coevolution with insects (and other animals), resulting in an efficient means of uniting sperm and egg. Their fruits are often designed to aid in the dispersal of their seeds.

We don’t live in a utopia, and much of the ‘fruit of web2.0’ is link-bait, designed not for the individual per se, but to attract people to some sort of commerce, or influence them toward being more loyal to a dominant group or personality. Leaders often make the worst Tweeps – RT-ing, pontificating, but not really having a conversation (pollinating).

To be a successful eco-system, education needs many types of flowers (digital and non) who use many different methods to attract pollinators. These require multiple mediums and methods. Flowers, for example, use colours, nectar and fragrances; radial vs. bilateral symmetry; incomplete vs. complete flowers; perfect vs. imperfect flowers; single vs. composite flowers. If there is little variation or variety, the species eventually fails.

There are few individual decisions to be made to either follow or not follow a set pattern or route (being busy isn’t one of them). Firstly, being part of a crowd that takes advantage of new technologies and mediums helps prevent self-fertilisation, reduces or minimises energy needs and potentially attracts new pollinators.

Secondly, those designing social-digital mechanisms are able to include or exclude any medium or idea that doesn’t suit their intention. This necessitates ‘user’ adaptation and in so doing creates a multitude of variables, making efforts towards external validity difficult. Ultimately there are more variable adaptations at the individual level than there are ‘crowd solutions’, so connecting that to educational methods that demand (and assume) stability is almost impossible. The crowd does not create the pathways; in the main, it just uses them, yet you’d think that everyone on the planet was interested in joining a massive social mission to reform education. That is clearly ridiculous – a questionable adaptation of more effective uses of social media designs. For example, reviewing a book on Amazon is an individual action which becomes collectively useful to everyone – Amazon, the reader and the author. On the other hand, attempting to write a book in the same manner would become increasingly problematic and unlikely to be as ‘good’ as the work of someone for whom writing is their art and craft. Collective writing will teach some people some things, but it won’t make anyone Neal Stephenson.

We return to the individual, and indeed this is what I began with. Attempting to copy successful individuals is a winning strategy (if we can find them and IF they are willing to engage with us). That, I think, is really problematic in the crowd, as those with the largest influence often don’t bother to engage with newcomers unless they are buying something, or useful in helping them get to where they want to be. We are pack-animals that find it hard to escape evolution, with or without our iPhones.

While the crowd can be used effectively, I am unconvinced that many of the essential pollinators want to do this universally, resulting in some flower species being unaware of what is growing in the other field. “Oh look, here comes our favourite bee!”. There are those who simply don’t want to let their people move past their authority, and those who are creating new authority online … it’s a titanic struggle.

Take games, for example. These are clearly an ICT by any definition and therefore should be subject to no more or less scrutiny than blogs, wikis etc., yet they seem to have to prove something even more in order to cross-pollinate with more popular, acceptable notions of what Web2.0 is. The thing that makes all this connect is the bees. We simply don’t have enough people like Stephen Heppell who genuinely get involved with individuals and make a sustained effort to move around the eco-system – and have been doing so for a very long time.

What I think students appreciate is when their teacher acts as an individual, but uses technology to connect to them in ways that pollinate ideas. I doubt they care too much how many Twitter followers their teacher has, nor should they. However, a savvy teacher who knows the value of being connected, and models this behaviour every day in direct and subtle ways, is to me a massive plus for a generation growing up with a constellation of technology and those involved in their education. My individual bottom line – be visible, else I cannot see you – in a medium of your choice.

Every flower knows how to attract its most successful pollinators, and pollinators are attracted to those flowers. Not all kids want it, not all teachers want it. However, that doesn’t make it an absolute rule – it is clearly subject to personal experience. To say “I would not use a game” simply because it’s a video game is like saying “I don’t like black ink, only blue” in another medium.

It’s more often individual rules that become supported by popular consensus, rather than knowing through experience. Pedagogy doesn’t come before tools as an absolute rule, as some tools have embedded cultural meaning, a uniqueness of their own, which creates an interplay all of its own – well beyond the classroom or gradebook.

The planet might not be thinking, but I suspect Mother Nature is, which is a lot more magical.

There is room for all kinds of technologies in learning, but let’s not fall into the honey-trap of believing that any top-10 list of link-bait is actually useful to the species, or that any field represents a viable eco-system. It takes a lot of individual time and courage to deliver something, and I take my hat off to anyone who makes the choice to do that – and not always follow the crowd.

Stick ’em with the pointy end

I’m off to the Teaching and Learning with Vision Conference at the end of the week, which I’m pretty happy about. Not least because, for the most part, the people speaking and those going are visible in my time-line, and have been for a while. This means that before I go, I know I’ll be learning from people who are not simply talking about it or promoting their book, but people who are actively doing things I can see in my time-line. From an organisational view, all conferences benefit from a known headline act, and with Stephen Heppell (whom I’ve seen many times) it certainly helps to attract an audience, as with Alec Couros and George Couros. But for the most part, the people leading sessions are very much drawn from the ranks of social media; most of them have a public profile that we can all engage with, and I hope that includes me – I’ll be talking about Interactive Wipeboards, Free Web Tools and Personal Learning Networks, so do make sure you stick around. I’m also thinking of using pyrotechnics, but airlines frown on that sort of luggage.

I wonder how much your time-line relates to what conferences you’d go to, or how much you’d pay to be there? With that in mind, I thought I’d do some digging (not in Minecraft for once). Here’s how I see the connection/value proposition.

I was using some Twitter tool yesterday that analysed (somehow) what it thought about people in my time-line. Its purpose was, on the whole, to ditch the spam-bots, but interestingly it also highlighted people who never reply, yet have a few thousand followers. It also decided that, for the most part, these people post links that have already appeared to that audience. In terms of ‘worth’, it suggested that I ditch them and spend time focusing on people who engage in conversations. The statistics were shocking, as were some of the names – people who, outside digital realms, occupy the title of leader. Of the 20 or so it found in this category, it said that they spend 80% or more of their time posting links that have already been posted, but without an RT or hat tip to anyone else. Basically, they strip the source and claim the find as they bounce it to their followers. In addition, they reply to their followers less than 7% of the time, and less than 1% of the time to people they don’t follow.

For many, Twitter has become a fantastic source of links, and the number one place that people bookmark them. I find, however, as I look back at some of these ‘tweetable moments’ from this group, that what they are doing is feeding yet more content to an audience with little regard for the reader. It basically says, in a random way – this is important to me, so it should be to you – without bothering to explain why or how it can be implemented. When I look at some of the content, it is often about reform and how we actually need to stop feeding kids content and think of new ways to engage them in their own development. So while they are ‘on’ Twitter, they are behaving in exactly the way they themselves say needs to change and, without really thinking, reinforce the very transmission-culture that locks people into frustrating positions as they whip up more moral panic. I don’t even know if they are aware of this.

So, when I look at the TLV Conference (or any gathering), I look for connections. A quick head count – who are these people in my time-line and what do they want? I’m less interested in what they do, or tell others to do, than in trying to figure out how much I have in common with them – and then to look at how they generally behave in the time-line towards others. I use Twitter for one thing – to build stronger process-networks. If I want to harvest links, I’ll use a tool that does it more efficiently than sitting on Twitter watching the pixels fly past. I think that’s what people want at conferences too. They are not going for Professional Development – they are going to bolster their connections and seek new ideas – to find collaborators who in some way can help them get what they want. If the speaker is a FIFO (fly-in-fly-out), never really engages with the audience beyond their presentation – and doesn’t feel like helping without being paid – then I can add them to a feed, but I won’t waste much time attempting to connect with (or court) them, as that is not what they want. At the same time, I don’t want to pay to hear something I can hear online anyway. It amazes me how many times some speakers deliver the same message. It’s like, “oh, I haven’t presented here, I’ll use this one”, as if everyone they are about to speak to is not connected to thousands more. I’m more than happy to @ping that – something Shelly did a few years ago at ISTE, when the keynote gave the same speech to educators as they had delivered a few weeks before to a corporate biscuit manufacturer. There should be a rule – 80% of what you use, you should not have used before. Furthermore, I don’t want to hear war stories of a glorious past, nor a review of the bleeding obvious.

I am happy to hear about what you are doing, however, and delighted if you invite me to join in. But like so many people at conferences, who are spending time and money – we don’t need motivating, we need to know how you did it, and how we can do it too – not one day, but now. I hope what I present will do that – and perhaps provoke some thought. What I want from speaking at a conference is simple – to connect those in the audience that want to discover, experience and explore with those I’m already connected to. Any conference or event that does that has an impact for the audience. We all know eventing is an industry; the problem is, the audience wants to participate, not just consume.

The greatest sword fighter in all the world delivered pizza

Something struck me yesterday, as Peggy Sheehy and I showed an extensively academic gathering the work we are doing in WoW in School and Massively Minecraft respectively — can someone really understand what we’re doing in games if they don’t also actively and visibly participate in broader digital cultures?

By that, I mean something more significant than knowing about, having an opinion of, and being paid to debate. Let me rewind a little and say that everything happening in digital media today was somewhat predicted in much of the cyberpunk fiction that pre-dated the internet. Coincidence or connection? Games emerged as much from this as from ancient Rome or Greece, or research. Novels such as Neuromancer by William Gibson (1984), Hardwired by Walter Jon Williams (1986), Ghost in the Shell by Masamune Shirow (1989), Snow Crash by Neal Stephenson (1992), Vurt by Jeff Noon (1993) or even Doctor Who in the 1976 episode “The Deadly Assassin,” where the good Doctor enters the Time Lords’ ultimate computer — called the Matrix.

“The sky above the port was the color of television, tuned to a dead channel.” (Neuromancer) is perhaps the greatest opening line of a book ever. It almost sums up the entire novel.

I think that without being ‘in’ digital media – by which I mean participating in the various channels where discussion is active, frequent and global – taking a theoretical view of what games are or are not, of what fun, play and immersion mean, is very much like Gibson’s opening line: being tuned to a dead channel.

It is not only easy to buy today’s game-influencer books on Amazon, it’s also easy (and recommended) to discuss them in the same medium, in spaces such as LinkedIn, Twitter, Facebook and the numerous game-focused blogging communities commonly inhabited by indie gamers. Unlike closed academic journals, these things are not locked behind an exclusive pay-wall for an exclusive audience. Even students creating games at university will be highly active in these spaces, while those teaching them are absent. The students know the power and value of these spaces, and how they are connected to their learning and audience.

James Gee, in his book “What Video Games Have to Teach Us about Learning and Literacy”, calls this an active, critical learning principle. Learning about, and coming to appreciate, interrelations within and across multiple sign systems (images, words, actions, symbols, artifacts, etc.) as a complex system is core to the learning experience. Like all authors, Gee can be shot at, as is the nature of the discussion, but I think it also makes sense to consider Neal Stephenson’s view in Snow Crash: “She’s a woman, you’re a dude. You’re not supposed to understand her. That’s not what she’s after … She doesn’t want you to understand her. She knows that’s impossible. She just wants you to understand yourself. Everything else is negotiable.”

While I admire and respect the academic research process, the time invested and the methods applied, I struggle to see how games scholars can exist externally to the very medium that is creating an entirely different experience of gaming. That means participating in online spaces and discussions where everyone’s reputation starts at zero and is earned through the culture of participation it demands. There is no RPL unless you can transfer and apply it in a relevant way to that audience, in the mediums they prefer.

What I’d like to know, should our work not be seen as important or relevant is this … What have you created in the metaverse that we can learn from to correct our foolish errors to make the experience better for our players? [just supply the link].

Please avoid dive-bombing the work of non-scholars (whatever that means) from a lofty perch, as that position is somewhat of an assumption. The greatest sword fighter in all the world delivered pizza.

Twitter ate my brain and I liked it on Facebook

Too much information hitting you too fast? Are we pushing information at educators simply in response to the massive multiplayer game known as Twitter? Maybe so, and here’s what I think is causing the potential edu-Snow Crash.

First, I’ve been on Twitter 97.8% longer than everyone else, according to some info-mining algorithm. This must indicate I know more than all but 3% of the planet, which entitles me to speak with authority. I also have a cute avatar and am willing to drop a button on my shirt at a conference for the boyz. What rubbish.

Second, back in the day, blogging was kind of slow. People took time to write, time to think and time to respond, in what seems today a very civil conversation between people who had the sense to learn how to search properly. So back in the days when young Will Richardson got a glimmer in his eye and wrote a book called “Blogs, Wikis and Podcasts”, people were already connected to a network. Then came people like Clay Shirky, who added a dose of moral panic with tales of civic-technology-saves-the-princess, and someone kicked off TED talks and scooped $6,000 a seat and a bucket-load of ad revenue. The “PLN” was born – and all of a sudden, it’s not cool unless you’re tweeting motivational messages or summarising Prensky on your IWB. No one got more literate; they got more distracted, and a few got paid or joined the Spice Girls.

Twitter is increasingly useless on purpose. It wanted, and has managed, to become the world’s most used bookmarking service, as people like @grattongirl endlessly fling link-bait into the metaverse and we follow Captain Obvious to whatever bloody web conference he’s at today – RT-ing his own Tweets and telling people what we should do, before hitting the buffet. If you want to be cool, that’s the way to do it. Then we have the social climbers – those who don’t do much at work apart from Tweet, feasting on their publicly funded iPad until it’s home time. If you want to get ahead, get on Twitter. Bugger reality, just keep saying it and the drones will believe you. Guess what: you’re still in reality. Take a look around.

A neutron walks into a bar and asks how much for a drink. The bartender replies, ‘For you, no charge.’

Reality check: Twitter isn’t what it was (let’s have a beer and talk about glory days later). To put it into perspective, it’s the Internet equivalent of CNN’s screen-ticker. It’s designed to distract and hold your attention only long enough for you to snack on link-bait (and download Snack Games and Snack Apps), and thereby pay less attention to the big picture above, which is often full of rhetoric that we’re also supposed to consume without question. And we do – as Twitter is the ultimate bartender, happy to listen to anyone and everyone. I wontz my MTV, Kittahs and links to Fat Kid on a Rollercoaster, as long as it doesn’t stop me yelling at politicians on #qanda, where I endlessly ‘top’ them with my dazzling appreciation of culture, media, politics and religion. Is that what you really want to show teachers? Yes, of course someone will pay you good money to do that in a workshop, so I hear.

It’s called information fluency. Take a breath and learn from someone like Judy O’Connell. Do you think Judy is drooling over her iPhone, tapping refresh like a dog trying to scratch an unreachable itch? No. Do you pay enough attention to what Judy’s been saying about the Semantic Web? Nope. I just tap my screen and RT things, unless I’m being really cool, when I RT it to #yam to impress the boss.

Judy – like many others – curates her information sources, as she knows how to organise them into useful collections for a purpose. I’ve been to Judy’s house – there’s no digital dumpster out the front.

If it takes 3 seconds to read a Tweet, it takes 30 to follow the link, 3 minutes to read the post and 3 hours to digest what it said (assuming it’s a post intended to make you think). That is nuts; no one can process that kind of information. If, however, it comes to you, behaves itself and sits in the spot you want it, then like a good dog, you are its master. No one wants a dog that barks and bounces around when you’re trying to think. 90% of links that get RTs are not about getting people to think – they are information coupons offering you a discount in the knowledge aisle, or about you buying into someone’s Top 10 hyperbole.

This would be one of those circumstances that people unfamiliar with the law of large numbers would call a coincidence.

This is the tragedy of blogging these days – people want a free coupon, not a conversation; we want it now and we don’t want to work for it. It just may be that we are now more dangerously irrelevant than we’d like to admit.

I thought the point of social media was that it could help fill the (_____) gap in thinking, and yet, just a few years on, we’ve managed to invent snack-media. Yay for us … for we are many and they are n00bs. There, I said it. Leave a comment in 140 characters or less, or just maybe go and blog something that tells me a story that changes everything.

And please follow and RT @massMinecraft if you notice it *wink*

Game Boys – The rise of gaming

Game Boys: Professional Videogaming’s Rise from the Basement to the Big-Time started out as a New York Post article by author Michael Kane. Justin Kownacki has a great review of the book from a storytelling perspective on his blog.

I found it an interesting post, simply because he’s pulled apart the way the book is written, not just what it says. There’s a lesson to be learned here for me, especially in attempting to describe the benefits of game based learning to an audience of whom perhaps 1% will have much game experience, and probably only as casual gamers. Having said that, it looks like a great read, and something to learn from if you intend to put an argument to an audience about anything you’re interested in. In the same vein, I took the time to read his recent post on “Are you a maker or a seller“, which again is food for thought. His storytelling pointers, in brief:

  • Show Your Audience Their Own Way Into the Story
  • It’s Never About the Plot; It’s About the People
  • Every Story Is a Mystery, So Reveal Your Information Strategically
  • Use Backstory to Fuel the Plot
  • Kick Every Ball Forward at Its Own Pace
  • Use Lingo to Unite Your Audience, Not to Alienate Them
  • Establish Dueling Expectations Within Your Audience
  • Harvesting the Seeds You Planted Long Ago Creates Closure

How can we help you to learn with mobiles – PBL project

One of the fantastic project based learning ideas that came out of our Massively Productive #red project with K12 distance and rural educators was “How can we help you learn with mobiles?”.

The problem statement centres on the high number of students who simply don’t respond to using a learning management system. They don’t log in, rendering all that instructionally designed course material worse than unprofitable. This leads to a series of escalating pleas, threats and punitive measures, which are largely ignored in a game of distance cat and mouse. As the project sketch played out, discussions turned to the transmissive use of SMS messages by schools. It seems most schools use SMS to tell parents that students are not attending school; however, the gateway is not used in duplex – students can’t SMS teachers. The irony is that mobiles are banned for students, yet it’s assumed that parents have them, as this is useful to the functional needs of school administration and proof of action. Mass SMS-ing, I am led to believe, is common practice, at high public cost, with unreported results on its impact on improving student performance or attendance. It obviously ticks a compliance box, but if this is all that mobiles and SMS are seen as useful for, it’s quite depressing.

Giving students the teacher’s mobile number was seen as risky, as was holding the student’s or parent’s number on the teacher’s phone, despite this information often being available to teachers via administration systems so they can call them. The convention is to use the official school phone to make contact, or to rely on the school SMS gateway to transmit a punitive message to the parent, which one assumes is then relayed to the child – assuming that is possible. In many cases the parents ignore it as well.

The project, as always, needs to make a product and a case to an audience. The idea was to look at how kids use their phones to learn and to communicate – bringing in aspects of recent events in the UK, how developing nations are using phones, and some quantitative research around the students and their community. This case would then be presented to the people who run the SMS transmission gateway, in order to argue how it might be better used by students to access and participate in online learning – especially in areas where actually accessing a computer and the internet is proving inadequate.

What is impressive here is that this project was put together by a group of teachers, brand new to PBL, in a day – as their first. It is wide enough to work at all ages and stages, it ties into current issues and known frustrations, and it solves a very large problem that both teachers and students face. Best of all, it takes the case to the people who make decisions, policy and rules about the use of phones. The group mapped the project to NSW BOS outcomes, ISTE NETS for Students and ISTE NETS for Teachers, and suggested several great ways of assessing the project. Better still, it drives an innovation – as the guiding questions use SMS for both delivery to and response from the students. You might think this is too simple or limited, given the access we have to LMS, blogs and wikis. Consider, though, that very high numbers of students simply do not respond to anything. Responding via a text might well be the first level of engagement with learning they have had in a long time.
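To make the duplex idea concrete, here is a minimal sketch of what the student-facing side of such a gateway could look like: a tiny keyword router that turns a broadcast-only SMS system into something students can text back. Everything here – the keywords, numbers and reply strings – is invented for illustration; a real version would sit behind whatever SMS provider the school already pays for.

```python
# Hypothetical sketch of a "duplex" school SMS gateway: route inbound
# student texts to simple learning actions, rather than only broadcasting
# absence notices outward. All commands and replies are made up.

def route_inbound_sms(sender: str, body: str, tasks: dict) -> str:
    """Return the reply text for one inbound student SMS."""
    words = body.strip().split(maxsplit=1)
    keyword = words[0].upper() if words else ""
    if keyword == "TASK":
        # Deliver the student's current task prompt, if one is set.
        return tasks.get(sender, "No task set. Text HELP for options.")
    if keyword == "SUBMIT" and len(words) == 2:
        # Acknowledge a short-answer submission (a real gateway would store it).
        return f"Got your {len(words[1])}-character answer. Thanks!"
    if keyword == "HELP":
        return "Commands: TASK (get your task), SUBMIT <answer>, HELP"
    return "Sorry, not understood. Text HELP for options."

# Example: one student has a task set, another texts an answer in.
tasks = {"+61400000001": "Read chapter 3, then SUBMIT one question you have."}
print(route_inbound_sms("+61400000001", "TASK", tasks))
print(route_inbound_sms("+61400000002", "submit The hero wants to go home", tasks))
```

Even something this crude would be a first level of two-way engagement – the point is not the code, but that the plumbing schools already run for punitive messages could carry learning in the other direction.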

Gratz! to the group for working so hard. It illustrates just why PBL allows teachers and students to find and solve meaningful problems – not merely cover content standards – and leads to visible social action.

More than Willing2.0

Innovation is a damn tricky thing, a word really easy to use but really hard to evidence.

I don’t think people on public social networks engaging in shared exploration of how best to weld technology onto education are innovators. Plenty of people harvest ideas and information more than they create it. Let’s not forget, plenty of social media messages are little more than advertising. Innovation isn’t popularity.

Innovation starts with people looking not to innovate, but to understand the need for innovation. If they are using Twitter, they’ll be using it not to advertise their next Web2.0 workshop, but to scan the horizon and find others to be stakeholders in their ideas – never consumers or buyers.

Let me poke a few people here: @teachpaperless, @pgsimoes, @kzenovka. All three are innovators, but in different ways. They not only make stuff, they also create platforms from which other people make even more useful stuff (not more of the same stuff).

They know stimulating ideas lead to an incubation period where they (and others) make prototypes, fail, try again, improve and talk about better ideas. Ultimately they are smart enough to know if what they (and their friends) are doing has a potential to scale – and only @teachpaperless has a beard.

My point is that social networks do not create, or carry, sufficient information for anyone to know whether they are innovative or not. In many ways social is dead; many of the innovators have made sufficient connections (see Stephen Downes) to need only the most convenient process to check their ideas are not dead-yet. The tragedy of spending life with non-innovators is that they’ll flog a dead idea rather than innovate.

Users might become more enlightened, or be able to remember new and interesting facts – but that isn’t how the world gets re-jigged. Radical ideas create ‘roots’ – that’s where the word comes from. It’s okay to Tweet radical, verbose rally calls, but are you backing that up with new roots – or are you trying to harvest the crop of people who are more than Willing2.0 to buy into last year’s ideas?

Innovation, according to Steve Collis, is something like this:

“No program. No tests. No teacher talk. No outcomes. No bureaucracy. The students will show up on day 1, and will begin to define their own learning pathway as they find clarity regarding where they want to go.”

Now that is clearly the work of a deranged madman. Not satisfied with blogging it, he made a video to explain this utter insanity.

How do we know this nut-bag can pull it off? How about looking at the last idea that he scaled?

So if you are looking for innovation, you won’t see it on the news; you have to dig it up – and, more importantly, connect. I first bumped into Steve wandering around Second Life years ago – and guess what he was talking about? The same thing as in his G.A.T video. Innovators are pathologically driven, often for years – and that’s why they are so interesting. If you are lucky, you occasionally see glimpses of their thoughts in their tweets or blogs, but most people only notice once they see what was done at scale.

The last people to be informed will always be human resources – they are still too busy Googling people.