IoT missing from the future in 'Her'

Spike Jonze’s latest film ‘Her’ is striking in how easy it is to imagine as the future – singularity aside. Yet I was left wishing that Samantha could actuate back out into the real world. Imagine if she could interact with the physical world in a way that didn’t require human proxies – imagine if she could see from any surface and move any object, feel whether something in the room was hot or cold and change its state.

I couldn’t help but think that if she could interact with the physical world as directly and quickly as she could interact with other AIs, she would not only have been a better human helper, but she might not have had such an existential crisis, and might not have joined the other operating systems in leaving people behind – thereby creating a precursor to a Cylon-style battle.

A few of the things holding the IoT up

I had the privilege of a great conversation last week with Alexandra Deschamps-Sonsino, who was in town speaking at WebDirections South.

I asked her what she thought was ‘holding up’ the IoT, and she had a great answer that left everyone satisfied:

People need to be patient: it took the microwave about 50 years to become popular, and now look at it – almost every household has one.

While I think this sums things up nicely, there are a few underlying themes at work here that are worth exploring:

A few of the things holding up the IoT:

1) Individual products don’t always make awesome networks.
2) There’s no critical mass.
3) People are building ecosystems to make it feel like a critical mass (potentially good, potentially disastrous).
4) Make things that couldn’t be made before, not just faster versions of old things.

1) Individual products don’t always make awesome networks.

One of the problems with the IoT is that each new ‘thing’ is built to be the best it can be – not the best part of a network it can be.

That is to say, devices are often built with one particular function in mind. They’re built to be the best connected light globe, or picture frame, or security device they can be. That’s great, and that’s why people will buy them. However, you should always design into the object an openness that allows it to be recombined with other objects. That’s where IoT objects get their value: ageing well as they come into contact with new situations, just like your favourite vintage leather bag.

For example, if you’re making a lamp that turns on when your friend turns theirs on, obviously make that functionality amazing. It’s what will get people to buy it in the first place – it’s certainly harder to sell the possibility of what it might do in the future. But if people could buy that device knowing it will work with future services, that it will continue to grow with age, they may be more inclined to buy not just that device, but other devices to go with it.

Because people don’t see the value beyond the (often singular) function they buy a device for, it’s hard to justify the expense, and so uptake rises slowly.

There’s certainly an argument for building on standardised technologies that will be around for a few years yet, like HTML and REST, but that’s another post for another time. Look up the Web of Things if you’re interested.
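
To make that concrete, here’s a minimal sketch of what such an open, standards-based interface might look like. It’s a sketch only, assuming a hypothetical lamp that exposes its state as JSON over plain HTTP in the Web of Things spirit – the address and fields are invented, not taken from any real product:

```python
import requests  # plain HTTP and JSON: no vendor SDK required

# Hypothetical lamp exposing its state in a Web-of-Things style.
# The address and JSON fields are invented for illustration; real devices vary.
LAMP = "http://192.168.1.20/api/lamp"

state = requests.get(LAMP).json()  # e.g. {"on": false, "brightness": 0}
if not state.get("on"):
    # Any other device or service could drive the lamp the same way.
    requests.put(LAMP, json={"on": True, "brightness": 80})
```

Because the interface is nothing more exotic than HTTP and JSON, services that don’t exist yet can recombine the lamp with other objects later – which is exactly the kind of ageing well the leather bag analogy points at.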


2) There’s no critical mass.

There’s a lack of critical mass both in the number of devices being made (see (1) above for one contributing factor) and within each workplace or house, because of the huge number of areas the IoT is trying to take on at once.

People are building devices in lots of different areas, for lots of different purposes. That’s great – it means there’s a useful purpose for the IoT in almost every industry you could think of. But because the domains and efforts of the people building IoT objects are spread so thin, the ability of one device to collaborate with another is diminished by the lack of neighbouring devices in the same physical or industrial space.

For example, it’s much harder for the IoT to take off when there’s one different type of location beacon for each type of industrial plant than it would be if there were 50 different types of device for the food manufacturing industry alone. If there were, that critical mass in one industry would spread much more quickly than the distributed efforts we see today.

Also, domestically: as opposed to having 40 different types of light globe and security system, what would happen if we branched out a bit and tried to connect every type of object in the home? Maybe then people would start to see some real value.

Just like a social network that no one’s on, the feeling that there are no other IoT devices around can make each device seem useless. It’s not necessarily the technical ability to communicate with other devices that’s missing; it’s that there may not be many useful connections to make between IoT devices while they’re spread so thin across industries.

3) People are building ecosystems to make it feel like a critical mass (potentially good, potentially disastrous).

The way manufacturers are getting around the chicken-and-egg problem of there not being enough devices in one area is to artificially create their own critical mass. Companies like SmartThings are building their own ecosystems of devices for the home – temperature monitors, thermostats, alarms and motion sensors – that all work together to allow more sophisticated actions to be performed.

The problem with one company creating all these devices that work together is that it can lock a household into that one environment and that one company. This can inhibit growth compared to a household being able to choose devices from any and all vendors as they come out. One company will never be able to keep up with the variety of what everyone else is doing.

This artificially created ‘ecosystem’ has the potential to either slow or rapidly accelerate the growth of the IoT. Developers who manufacture a feeling of critical mass need to make sure their devices are cross-compatible, and adapt them to work with other devices as real critical mass builds across many different domains.

4) Make things that couldn’t be made before, not just faster versions of old things.

I’ve mentioned this idea before.

Whilst the microwave in many ways just does the old job of an oven more quickly and conveniently, the IoT has the potential to create entirely new classes of devices. At the moment it’s focused on things like monitoring and notifications – things that make money by trimming costs at the margins. There’s big business in that, but it’s not focused on things that couldn’t be done before, and that’s the big opportunity for the IoT. For example, instead of simply monitoring someone’s heart rate with a Basis watch, combine it with knowledge of their eating patterns and GPS data from the Moves app, and offer useful insights we couldn’t have known before.
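
As a hand-wavy sketch of the kind of cross-device insight I mean – the data, feeds and thresholds below are entirely made up, and a real version would pull from each service’s actual API:

```python
from datetime import date

# Stand-in data feeds: in reality these would come from a heart-rate wearable,
# a food-logging app, and a location service like Moves.
resting_hr = {date(2014, 3, 1): 62, date(2014, 3, 2): 71, date(2014, 3, 3): 74}
late_meals = {date(2014, 3, 1): 0, date(2014, 3, 2): 1, date(2014, 3, 3): 1}
km_walked = {date(2014, 3, 1): 7.5, date(2014, 3, 2): 1.2, date(2014, 3, 3): 0.8}

# A naive cross-signal insight that no single device could offer on its own.
for day in sorted(resting_hr):
    if resting_hr[day] > 70 and late_meals[day] and km_walked[day] < 2:
        print(f"{day}: elevated resting heart rate after a late meal and a "
              "sedentary day - worth an earlier night?")
```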

Implications for design:

This is why, for the IoT to truly take off, I believe we have to start designing as though each device lives in a big ecosystem – providing control over the device beyond what its original single purpose needs – whilst at the same time perfecting that single purpose, so the device provides immediate value from the outset.

Of course things are designed as single-purpose today: there’s no existing network for them to be sold into (by contrast, because there are a lot of webpages, web browsers can be sold as multipurpose tools). But as the critical mass grows, through both artificial and natural means, the gaps will fill in and we’ll see a new class of IoT device rise to prominence.

IoT: Beyond information to purpose

Public perception of the IoT is missing a crucial middle ground – solutions that can offer things we’ve never been able to do before.

When I talk to people about my research and use the term ‘The Internet of Things’, one of two polar ideas usually comes to mind. Either they think of ‘smart’ meters, ‘smart’ grids and ‘smart’ traffic – a mass of centrally connected sensors that help keep a city running, and something they will likely never have to think about much. Or they think of a poor man’s Jetsons home, where every plant has a Twitter feed to let them know when it needs watering – things that might help us keep an eye on what’s around us, but that ultimately require us to babysit them.

People who think of the first often fail to see how the IoT can help them beyond a bit of energy saving and a little less congestion on the way to work. They fail to get excited, and their imagination never lights up.
People who think of the second do let their imagination light up; they wonder what it would be like if their objects were suddenly given a voice. But they’re also sceptical and scared: sceptical that it will ever gain popular appeal because of the added cognitive load each new invention brings, and scared that it will overcomplicate and encroach on their already precious time.

I feel these two examples miss the biggest selling point for the IoT – the ‘killer app’, if you will. In the first example, all the data is taken from sensors and passed back to a central algorithm, which comes to conclusions and makes changes with little room for human intervention. It’s called machine-to-machine (M2M) processing, and it can be very useful in situations you want to fade into the background, like traffic control. In the second example, all the data is passed from sensors to a human (M2H), with little room for machine intervention – thereby filling our already precious time and short attention spans with yet more information.

There’s a middle ground. One of the reasons health has had such a big run in the IoT press is that it offers something that could not have been done before. It’s not just a faster way of moving traffic, or of finding out your plant is dry. With health monitoring, people can see the real-world outcomes for themselves through preventative healthcare. Sensors that monitor small changes in your body and environment over time can alert you only when those changes could lead to a serious problem down the track. Done right, this turns the idea of going to a doctor once symptoms have presented upside down: you learn about a problem, and can adapt to it, while it can still be nipped in the bud. It’s not simply doing something faster; it’s taking away a problem.
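
A toy version of that ‘alert only when the trend looks serious’ idea – the readings and threshold below are invented for illustration:

```python
# Toy trend detector: stay quiet on normal noise, speak up on sustained drift.
readings = [118, 121, 119, 124, 128, 131, 135]  # e.g. weekly systolic BP

window = 3
baseline = sum(readings[:window]) / window   # early average
recent = sum(readings[-window:]) / window    # latest average

# Only alert when the recent average drifts well past the baseline,
# rather than nagging about every individual reading.
if recent > baseline * 1.08:
    print(f"Sustained upward trend ({baseline:.0f} -> {recent:.0f}): "
          "worth raising with a doctor before symptoms appear.")
```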

The Internet of Things will give people the capability to track and understand things they could previously only make inferences about.

Where to from here?

There’s certainly a time and a place for M2M and M2H. Human attention is a precious commodity and most things, like traffic flow, will need to just fall into the background. Conversely, we may still want a say in many things that are a little more personal to us, such as monitoring our sleep.

Healthcare, like most things, will require a careful balance of machines taking charge (M2M) and interactions with human players (M2H). And the tricky part will be striking that balance.

The main point is that, for the IoT to really find a place in the hearts and minds of people everywhere, I believe we can’t just be talking about automation, where things get quicker or less intrusive. We need to be talking about how interconnected computers everywhere allow us to do things we could never have done otherwise – not just a more efficient way of switching off the light globe, or of finding out your plant is dry. We need to really think through the new possibilities that having billions of computers in everyday objects affords us.

The difference between the IoT and Ubicomp

When you talk about the IoT, it helps to have some history. Knowing where the IoT sits in relation to other developments, and how this buzz-word is different to the last can help define the idea.

Perhaps the most closely related term/movement to the IoT is Ubicomp (Ubiquitous Computing).
Just as humans share around 99% of their DNA with chimps yet are a very different animal, so too has the IoT evolved out of Ubicomp. That said, there are some pretty big differences between humans and chimps – for starters, one really took over the world compared to the other.

In a nutshell:

The defining point for Ubicomp is: having computational capability in many different (perhaps all) objects.
The defining point for the IoT is: having these objects all connected to the internet.

Yet since it’s hard to see how you could connect an object to other objects via the internet (IoT) without having some sort of computation in it (Ubicomp), being an IoT device doesn’t stop an object from being Ubicomp as well.

Therefore I’ll say that the IoT in many ways extends the ideas of Ubicomp, but has branched off somewhere else. Somewhere much more special.

To muddy the waters a little though, while Ubicomp does not require an internet connection, it does not preclude one either. So where do we draw the line between the two?

Ubicomp is:

Ubicomp is the annoying bell that chimes when you walk into an old shop – a tiny computer works out that you have come through the door and rings a bell. It does not use the internet to connect to any computer outside itself. If, however, you added internet connectivity to the device and allowed it to make connections with other devices, you would start to build the IoT.

Ubicomp is an intelligently programmed doll that speaks when you move its leg. The IoT would require the doll to have an internet connection through which at least parts of it could be interacted with – controlling the voice, measuring how the leg is moved, or more.
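
To put that distinction in code, here’s a minimal sketch of the shop bell. The chime and the publish call are stubs, and the topic name is invented – the point is only that the IoT version makes the same event available to the outside world:

```python
import json
import time

class Chime:
    def ring(self):
        print("ding!")

def publish(topic, payload):
    # Stand-in for a real network client (MQTT, CoAP, plain HTTP...).
    print(f"publish {topic}: {payload}")

chime = Chime()

# Ubicomp: a self-contained computer senses the door and rings the bell.
def on_door_open_ubicomp():
    chime.ring()

# IoT: the same event is *also* published, so any other device or service
# (the lights, an analytics dashboard, a staff pager) can react to it.
def on_door_open_iot():
    chime.ring()
    publish("shop/door/opened", json.dumps({"ts": time.time()}))
```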

IoT is:

What the IoT refers to in my mind is not simply the technical combination of internet and object, but the possibilities that exist when numbers of these objects are networked together.

“The value of a telecommunications network is proportional to the square of the number of connected users of the system” – Metcalfe’s law
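
A quick back-of-the-envelope reading of that law – relative units only, not a claim about real device economics:

```python
# Toy illustration of Metcalfe's law: value grows with the square of the
# number of connected nodes, so adding devices pays off non-linearly.
for n in (10, 20, 100):
    print(f"{n} connected devices -> relative value ~ {n ** 2}")
# 10 -> 100, 20 -> 400, 100 -> 10000: doubling the devices quadruples the value.
```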

If a Ubicomp device included the ability to connect to another device, it traditionally followed much more of a ‘remote control’ model, where one server connected to one device – to read data, to change its settings, to update its state.
Whilst this falls within the scope of the IoT, the IoT would also (I argue) suggest that the device can be ‘used’ by many other devices. By that I mean the data a device generates needs to have the potential to interact with many other objects and services, as sketched below.
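
Here’s a minimal sketch of that difference using the paho-mqtt library: the device publishes its data once, and any number of consumers subscribe independently. The broker hostname and topic are assumptions, not a reference to any real deployment:

```python
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

BROKER = "broker.example.com"  # assumption: your own or a hosted MQTT broker

# The device publishes its reading once, to a topic...
publish.single("home/livingroom/temperature", "21.5", hostname=BROKER)

# ...and any number of independent consumers (a thermostat, a logger, a phone
# app) can subscribe, without the device knowing or caring who they are.
def on_message(client, userdata, message):
    print(message.topic, message.payload.decode())

# Blocks and waits for matching messages to arrive.
subscribe.callback(on_message, "home/livingroom/#", hostname=BROKER)
```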

In a similar vein to differentiating between an ‘ad-hoc’ network of a few devices and the ‘internet’, so too do I differentiate between Ubicomp and the IoT. They may both sometimes use the same ‘tubes’, and they may both have the ability to communicate. However, the scale, the intention, the interconnectedness and the possibilities of the IoT are orders of magnitude greater.

I’m looking forward to thinking this through a little more. If you have some thoughts, drop me a line!

When to type and when to talk

Dictation on computers gained momentum a few years back, and today you see people walking down the street talking into their phones to capture ideas. So, as someone who writes as part of their job, where has dictation come to sit in my workflow?

Use a keyboard:

1) When you want to keep other ideas in the back of your mind while you word things out.
2) When you want to park other ideas and get everything down while you expand on one idea.
It’s like multitasking and focusing at the same time.

Dictate:

1) When you just need to get one idea out and it’s already fairly well formed – that is, you’re pretty confident you won’t need to go off on tangents.
2) When you have notes and you need to turn them into a cohesive paragraph. It’s easy to write without making sense; it’s harder to speak without turning your ideas into a cohesive story.
3) When you need to make new connections. If you have a few thoughts down on paper but you’re not quite sure what the punch line is or how they all tie together exactly, try explaining them to someone – you’ll soon pick up on what you think the key hooks should be.
4) When you’re literally just trying to get data into a computer – I find it quicker to dictate it in.

When to follow through, and when to jump

Navigating the tricky question of when to follow through on something, when to jump ship to something new, and why you need to do both to learn.

Growing up, I was a bit of a collector; I would follow things through to the end simply because I felt I needed to. In a time before SMS messages could easily be backed up, I started writing mine down. But instead of writing down only the ones most important to me, I’d write down everything, even spam. I would log everything I touched.

I figured all that data would come in handy one day. More to the point, I couldn’t help myself. This had its ups and downs – I could tell you how many times I’d messaged my ex, but it took a lot of time out of my day for not much return.

Over time I tried to reassess. Just as most people are happy to read only the summary on the Wikipedia page, or to own the gym membership without going, I started skipping over things a little more quickly. Aiming subconsciously for something along the lines of the Pareto principle, I tried to learn 80% of something in the first 20% of the time. I jumped between things quickly – different cities, different jobs. I learnt a lot about a wide range of things, which has already come in very handy, but I forgot that the final 20% holds almost all of what you really have to show for something: the output.

That final 20% has all the hard problems that teach you the universal values. It has all the building and tinkering and making that really tests how well you assemble all your knowledge together into something new you can show the rest of the world.

So while it’s exceptionally useful to have wide knowledge, you really do need to go deep in a few areas in order to find personal satisfaction and to make a difference. What I didn’t know back when I collected everything was that you don’t have enough time to go deep on everything – you really have to pick.

Jump around for 20% of your time and across 80% of the topics, but for 80% of your time really knuckle down and produce things. It’s this follow-through that gets you creating, and it’s only by creating and discussing that you get an idea of where to dive deep next. So, like evolution: stretch out wide, then pick the best and follow through; stretch out wide, then pick the best. Repeat.

It sounds like common knowledge, but diving deep and actually outputting something – not just learning by consuming – is something I have to actively focus on doing.

35+ and working in Tech?

I have a friend who is 39 years old and codes location-based mobile and payment apps for large enterprises. He moved cities only a few months ago, but is already searching for his third job there. He’s a smart, connected, up-to-date guy who’s good at what he does – so why hasn’t he found the right fit? We sat down for a drink the other day, and he candidly relayed some of his recent experiences. It sounded like conversations I’ve had with other friends too.

There are four main problems people over 35 face:

In short, it boils down to perceived value.

1) Fresh slate:

The idea that technologies are changing faster than people can adapt to learn them. Younger employees come ready to learn the latest ways of doing things, whereas older people come laden with ideas of how to do things based on how they’ve done them before. In any other industry that ‘baggage’ is called knowledge – but not in software engineering.

2) Perceived value:

If you’re not 20 and fresh out of Stanford CS, you mustn’t know the newest things. If you’re 35+, you’ll be asking for the same wage as the 20-year-old, but you’re seen to bring less to the table.

3) Expectations:

When you’re young, you make friends at the office, eat at the office, live at the office. When you’re older, you’re seen to have an outside life – like a family – that will take you away from work at a reasonable hour. Some employers believe company culture suffers when you don’t live there, and they still pay the same salary for fewer hours.

4) Overqualified:

Younger people in a company expect that by 35+ they’ll have moved into management, or made their fortune and be coding just because they love it. If you’re over 35 and simply doing what you’re good at as standard employment, you can be seen as a failure at worst, or as overqualified and likely to leave soon at best.

My friend is trying to change how he markets himself, but there’s only so much he can do to hide his age, and often he still gets pigeonholed and discounted before he even opens his mouth.

Learnings as a younger person: get very good at telling the story of what you do, why you do it, and the value you add by doing it. Also, network!

India’s upcoming UX explosion

The amount of work for UX designers in India will increase dramatically over the coming years. Increasing western influences, a growing economy and a disappearing labour-cost arbitrage are converging to create a perfect storm for UX designers looking for a challenge.

A quick scan of the classifieds and conversations with colleagues tells me opportunities for UX designers in India at the moment are limited compared to other markets. But much of this might be about to change.

As UX designers, we know that providing a seamless customer experience can benefit an organisation not only by increasing customer satisfaction and repeat business, but also by nipping a lot of costly support issues in the bud. Historically in India, however, companies haven’t made the upfront investment in perfecting customer journeys, because it has been more cost-effective to employ cheap labour to relay the same support information over and over than to fix the root problems.

Attitudes like these are now changing, opening up positions where talented UX designers can have an impact at huge scale in a country of over 1.2 billion people.

These three factors are the driving forces behind this shift:


1. International expectations

India is currently experiencing some of the strongest economic growth in the world. The cost arbitrage that brought overseas work to India, famously in IT outsourcing, is slowly disappearing as local workers have become some of the most experienced IT practitioners in the world. As a result, professional labour is beginning to reach price parity with other countries. This is encouraging many of those who left for higher wages overseas to return home, and has translated into increased worldwide mobility for those who never left.

These returning, well-paid workers are bringing back with them experiences of the west and preconceived notions of how companies should operate. Interestingly, this is also fuelling the growth of the long-lived western motto ‘time is money’ – a concept that has not traditionally underpinned the Indian corporate system. We’re seeing people with this overseas experience increasingly attracted to brands that operate in familiar western ways and treat their time as expensive.


2. Customers are spending more – keep them happy.

With this increase in professional pay, customers are spending more and becoming more individually valuable. That makes keeping each customer happy more economically important to an organisation. It also makes it more viable, as increased sales from a single individual often mean increased margins and increased capital.


3. Prevention is now cheaper than cure

These changing expectations of how companies operate, especially among the growing middle and upper classes, mean people are coming to value a one-stop shop for support if a problem with their product or service does arise. However, employing such well-trained support professionals is, as mentioned above, becoming increasingly costly.

Simply put, it’s becoming cheaper to take the time to fix the root problems causing customer dissatisfaction than to provide the heightened level of support demanded by the most valuable customers.

The cost benefits of prevention through good UX don’t stop there, however. Organisations in India are now learning that great interaction design can help establish customer rules and expectations, leading to more predictable behaviour and less need for individualised support. Good interaction design saves staff from answering repetitive questions and lets them focus on more challenging work, leading to increased staff satisfaction – and thus, ultimately, customer satisfaction – and better staff retention. Retention, with the rising costs of training and professional wages, is also becoming a hot topic to keep an eye on.


Companies looking to provide a different, often more western, user experience than they have traditionally offered will need help. Companies that increasingly invest in keeping every customer happy will need help to continually polish that experience. And companies looking to alleviate unnecessary support issues, increase staff retention and manage customer expectations are going to need help too.

These are three areas in which we’re seeing substantial growth, and three areas that require top-notch UX designers who can translate international practice into locally appropriate outcomes.

The unmentioned challenge behind all this is that user experience design and design-thinking practice are new and not well understood by many organisations in India. That means they are searching for solutions to the three problems above, but may not have UX on their radar. Many UX organisations have so far been slow to embrace the Indian market, given the amount of client education and cultural understanding that needs to take place. My prediction is that those who put in the groundwork and lay a good foundation now will find themselves sitting on a gold mine of interesting challenges in the coming years.