AI in the Workplace: Everything You Need to Know


While there has been plenty of talk about how artificial intelligence (AI) will transform the workplace, so far the effects have been subtle and slow to reveal themselves, although the scale of the oncoming change is starting to become apparent.

The ability of computers to learn how to carry out specific tasks, rather than being explicitly programmed to do so, puts a wide range of complex roles within reach of automation for the first time.

While this fresh wave of automation is not yet widespread, today there are glimpses of how profoundly these new capabilities will change the nature of work: Amazon Go’s cashierless supermarket where shoppers just grab what they want and leave, the thousands of Amazon Kiva robots that ferry goods to and fro in the retail giant’s warehouses, and the pairing of AI and IoT sensors to carry out predictive maintenance on ThyssenKrupp elevators across the world.

The use of AI in the workplace may not yet be commonplace, but it is likely to affect every industry in myriad ways.

How are businesses using AI today and where is it having the most impact?

While the uses of AI are fledgling, there is no shortage of pilot projects, and trials of AI-assisted technologies in the workplace span retail, manufacturing, sales, customer service, logistics and office management.

Bots and virtual assistants

As machine-learning-trained systems gain the ability to understand speech and, to a lesser extent, language, the prospect of automated chatbots is becoming a reality.

While such systems are typically still limited to simple question-and-answer scenarios, retailers are experimenting with using such bots to answer customer queries and to help staff respond to questions.
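At their simplest, such question-and-answer bots map keywords in a customer's message to a canned intent and reply. A minimal sketch of that pattern, with invented product-support intents and replies (real deployments use trained language models rather than keyword lists):

```python
# A minimal keyword-matching bot: each intent maps trigger words to a
# canned reply. Intents and replies here are invented for illustration.

INTENTS = {
    "stock": (["stock", "available", "in store"],
              "Let me check stock levels for that product."),
    "hours": (["open", "hours", "closing"],
              "Our stores are open 9am-8pm, Monday to Saturday."),
    "returns": (["return", "refund", "exchange"],
                "You can return unused items within 30 days with a receipt."),
}

def reply(message: str) -> str:
    """Return the canned reply for the first intent whose keywords match."""
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(word in text for word in keywords):
            return answer
    # Fall back to a human, as most retail bots do for unmatched queries.
    return "Sorry, I didn't understand. Let me connect you to a colleague."
```

The hand-off to a human on an unmatched query mirrors how current retail bots are scoped: handle the simple, high-volume questions and escalate the rest.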

One such example is the UK electronics retailer Dixons Carphone, which used the Microsoft Bot Framework and Microsoft Cognitive Services to create a conversational bot. The bot, named Cami, is being used to answer questions through the company’s Currys brand website and through Facebook Messenger, helping staff and customers to find products and check stock.

Similarly, the telecoms provider Three worked with consultancy Redant to develop an IBM Watson-based “multi-channel sales provider”, designed to answer spoken or text-based natural-language queries based on its analysis of thousands of documents, product details, customer reviews, and social media posts. Coca-Cola was also pleased with how a customer-service chatbot based on Nuance’s Nina handled simple customer queries.

However, many of these offerings still seem to be at an early stage, and are still relatively limited in the scope of customer interactions they can handle.

Google demonstrated the potential of chatbots recently with a demo of its Duplex system, which was shown ringing up businesses such as a restaurant and a hairdresser’s to book appointments. The system sounded and behaved enough like a human, even dropping in the occasional bored-sounding ‘mm-hmm’, that the people speaking to it seemed not to realize they weren’t talking to a person. While it was a very impressive demo of speech synthesis and natural-language understanding, it was unclear to what extent the demo had been edited and how many failed attempts preceded it.

Other bots have more of an enterprise focus, with software giant SAP developing SAP CoPilot, a digital assistant designed to help with tasks such as drafting purchasing contracts, as well as with collaboration between colleagues, via its ability to answer business-specific questions such as “What’s my total spend with vendor X?”.

Some of these offerings blur the line between bots limited to conversing about a specific topic and the broader conversational abilities of virtual assistants.

Examples of business-focused virtual assistants include IPSoft’s Amelia, which can serve a range of roles, including support-desk agent or case researcher at a law firm, following several months of training from humans carrying out the role and ingesting annotated documents. Amelia is not just a trainable chat assistant, but also a conversational frontend for IPSoft’s 1Desk, which automates a range of back-end enterprise operations.

Household names are also muscling into the area of creating a virtual assistant for the enterprise space, whether it be Amazon with its Alexa for Business or Cisco’s business-targeted AI.

As with many AI-assisted technologies, the aim of using chatbots and virtual assistants appears to be either making existing employees more effective or replacing manual roles.

IoT and analytics

Analytics is nothing new, but the sheer volume of data it’s possible to collect from cheap sensors and IoT devices is at an all-time high.

Making smart decisions by applying machine learning to this data can pay dividends, as international elevator company ThyssenKrupp is discovering.

The German firm uses Microsoft’s Azure cloud platform and Azure IoT Suite to analyze data from internet-connected sensors fitted to the 1.1 million elevators it maintains worldwide, and uses machine learning to predict when elevators might be about to break down, based on patterns of past use and failure. Doing so allows engineers to carry out predictive maintenance, fixing potential issues before they cause a problem and preventing unnecessary callouts. The system is capable of cutting elevator downtime in half, according to ThyssenKrupp.
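The underlying idea, flagging a machine when its recent sensor readings drift away from a learned baseline, can be sketched in a few lines. This is an illustrative simplification, not ThyssenKrupp's actual system; the sensor values, window, and threshold are invented:

```python
from statistics import mean, stdev

def needs_maintenance(readings, window=24, threshold=3.0):
    """Flag a sensor stream whose recent values drift from the baseline.

    readings: chronological sensor values (e.g. door-motor current draw).
    Returns True when the mean of the last `window` readings sits more
    than `threshold` standard deviations from the historical baseline.
    """
    baseline, recent = readings[:-window], readings[-window:]
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > threshold

# A healthy elevator: stable readings, no flag raised.
healthy = [5.0, 5.1, 4.9, 5.0] * 20
# A degrading door motor: current draw creeping upward.
degrading = healthy + [7.5, 7.8, 8.1, 8.4] * 6
```

Production systems learn far richer failure signatures from historical breakdowns, but the shape is the same: compare live telemetry to expected behavior and dispatch an engineer before the anomaly becomes a fault.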

This approach of making predictions from IoT data has applications across many industries, from predictive maintenance of railway tracks, oil pipelines and plane engines, through to adjusting automated crop-tending systems to yield a better harvest.


Machine vision in the workplace

Machine vision is an area of AI that could allow the automation of swathes of manual roles that until recently would have been considered too complex for a computer system to handle.

A case in point is Amazon Go, a grocery store where shoppers simply pick up what they want and walk out with their goods. The system uses more than 100 cameras dotted throughout the store to track what each shopper picks up, assigning each item a unique tag so that the shopper can be charged the correct amount when they leave, via an Amazon app on their smartphone.
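The bookkeeping behind such a store can be thought of as an event stream: the vision system emits pick and put-back events tagged with a shopper and an item, and checkout is simply totalling what remains in the shopper's virtual cart. A sketch of that logic with invented item names and prices (Amazon's real pipeline is, of course, far more involved):

```python
# A "virtual cart" built from vision-system events. The vision side is
# not modeled here; items and prices are invented for illustration.
from collections import Counter

PRICES = {"milk": 1.20, "bread": 0.95, "coffee": 3.50}

def checkout(events):
    """Total a shopper's bill from a stream of (action, item) events."""
    cart = Counter()
    for action, item in events:
        if action == "pick":
            cart[item] += 1
        elif action == "put_back" and cart[item] > 0:
            cart[item] -= 1
    return round(sum(PRICES[item] * qty for item, qty in cart.items()), 2)

# A shopper grabs milk and coffee, then changes their mind about bread.
bill = checkout([("pick", "milk"), ("pick", "coffee"),
                 ("pick", "bread"), ("put_back", "bread")])
```

The hard part in practice is upstream of this logic: reliably attributing each pick event to the right shopper from camera footage, which is where the machine vision does its work.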

According to some reports, these stores will need a maximum of 10 staff each, an order of magnitude fewer employees than a traditional supermarket.

Cashierless retail is clearly only one possible use of machine vision, with potential for the technology to help robots cook food in restaurants, inspect infrastructure for wear and tear, or pick goods from warehouse shelves; the list is almost endless.

Robots in the workplace

Robots are nothing new in the workplace, having been a fixture in car manufacturing plants for decades.

What’s different today is that robots are beginning to be used for less repetitive and predictable tasks.

While a robotic assembly arm on a car production line requires the vehicle’s chassis to be in precisely the same place every time it welds on a car part, robots are starting to emerge that can cope with a greater degree of uncertainty in their environment, broadening the tasks they can take on and opening the possibility of working more closely alongside humans.

Amazon again is leading the way in using robots to improve efficiency inside its warehouses. Its knee-high warehouse robots carry shelves of products to human pickers, who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more, and stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and smaller robotics firms are working to automate the remaining manual jobs in the warehouse, such as picking items from shelves, so it’s not a given that manual and robotic labor will continue to grow hand in hand.

The Chinese manufacturer Foxconn, which makes Apple iPhones, has been gradually increasing its use of robotics on its production line, reportedly automating 60,000 jobs in 2016.

As robotic manufacturing has advanced, an increasing proportion of manual processes are being automated in Chinese factories, with the Changying Precision Technology Company replacing 90 percent of its human workforce at a factory in Dongguan where it made parts for cell phones.

Breakthroughs in fields such as machine learning are helping augment robots’ abilities and push them into new areas.

This year saw the launch of a burger-flipping robot, dubbed Flippy, which uses machine vision to detect where burgers sit on the griddle. While Flippy can reportedly cook 2,000 burgers a day, it was taken offline for upgrades after just one day, when a surge in interest left it unable to keep up with demand.

The Forrester report The CIO’s Guide To Automation, AI, And Robotics talks of an emerging category of “cobots”, robots that can safely collaborate with people on tasks, which are now being made by companies ranging from startups like Rethink Robotics to established players like FANUC.

Another relatively new type of robot is the customer-service bot, which packs a chatbot capable of understanding speech and language into a robot that often sports a friendly humanoid appearance.

Forrester’s report references SoftBank’s Pepper and LoweBot from Lowes as examples, and sees them as an evolution of customer-service terminals and kiosks used by McDonald’s and Delta Airlines for some time.

“In a retail context, CIOs can use these robots to provide customer recommendations, to run replenishment and restocking efforts, and to relieve humans of repetitive tasks,” it states.

Robotic Process Automation

Back office tasks like data entry, accounting, human resources and supply-chain management are full of repetitive and semi-predictable tasks — collecting data and shunting it between different systems — that are ripe for automation.

In Robotic Process Automation (RPA), software is used to capture the rules that govern how people process transactions, manipulate data, and move data between computer systems, and those rules are then used to build an automated platform that can perform the same roles.
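At its core the pattern is turning a human procedure into executable rules. A toy sketch of one such captured rule, a financial reconciliation with invented field names and systems:

```python
# A captured back-office rule: match an invoice against the ledger,
# and escalate mismatches to a human, exactly as the manual process
# would. Field names and team names are invented for illustration.

def reconcile(invoice: dict, ledger_entry: dict) -> dict:
    """Apply the captured reconciliation rules to one transaction."""
    matched = invoice["total"] == ledger_entry["amount"]
    return {
        "invoice_id": invoice["id"],
        "status": "matched" if matched else "needs-review",
        # Rule: route mismatches to a human reviewer.
        "assigned_to": None if matched else "finance-team",
    }

result = reconcile({"id": "INV-042", "total": 99.95},
                   {"amount": 99.95})
```

Real RPA suites record these rules from watching clerks work in existing applications rather than having them written by hand, but the result is the same kind of deterministic, auditable logic, with humans kept in the loop for the exceptions.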

The Forrester report The CIO’s Guide To Automation, AI, And Robotics found that RPA needn’t eliminate human jobs: “A dairy company eliminated eight percent of the positions in one department when it implemented RPA for financial reconciliations — but it told Forrester that the remaining human employees were far happier with the more complex, interesting work they were doing while leaving the repetitive tasks to the bots.”

As with other shifts to AI-driven automation, however, existing processes can’t simply be switched from manual to automated at the flick of a switch; they will generally need to be reengineered to suit the automated system in order to maximize the benefits.

Autonomous vehicles and AI

While autonomous vehicles are not yet a daily sight, and fully fledged self-driving trucks are likely a few years away, smaller delivery robots are increasingly being developed, hinting at a near-future revolution in how small goods are transported.

Starship Technologies has piloted using its autonomous, wheeled bots to deliver food in Europe, navigating their own way along pavements to their destination. Similarly, Domino’s has developed its own Domino’s Robotic Unit, an autonomous pizza-delivery bot designed to trundle along pavements, while Piaggio Fast Forward’s Gita robot, a wheeled ball that can carry up to 44 pounds and follow a human, is now available.

Meanwhile both Amazon and Google are engaged in separate drone-delivery projects, with a view to using automated drones to reduce delivery times for retail goods to 30 minutes or less on a 24×7 basis.

These projects are still at a relatively early stage, and delivery drones and bots remain a rare sight today, although the number of pilot projects suggests they may become an everyday sight within a few years.

AI and translation and transcription

Real-time translation and speech recognition have long been holy grails of AI research, and while automated systems still lag behind human performance in these areas, they have improved dramatically in recent years.

With transcription accuracy in the high 90s, and translation good enough to be at least understandable in most instances, business-focused products are starting to emerge that offer automated translation and transcription for meetings.

One example is the Ricoh Cognitive Whiteboard, which can use transcription services provided by IBM Watson to take notes from meetings and, where remote teams collaborate across countries, translate the meeting in real time, as demoed at the IBM Watson IoT Center in Germany.

AI and sentiment analysis on social media

As machine learning has delivered breakthroughs in natural-language processing, so the use of AI-assisted systems to analyze customer sentiment on social media has increased — with offerings available from IBM, Microsoft and other major cloud platform providers.

However, while the systems can flag broad sentiment among a firm’s customer base, enterprises are quick to highlight their limitations — such as the struggles they have with identifying context and how it can change meaning.
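The context problem is easy to see in a bare-bones lexicon scorer, the simplest form of sentiment analysis: counting positive and negative words misreads negation unless the rules handle it explicitly, and even then sarcasm and wider context defeat simple approaches. A minimal sketch with an invented lexicon:

```python
# A lexicon-based sentiment scorer: +1 per positive word, -1 per
# negative word, with polarity flipped after a negator. The tiny
# word lists are invented for illustration.

POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "terrible"}
NEGATORS = {"not", "never", "no"}

def sentiment(text: str) -> int:
    """Score a post; positive totals suggest positive sentiment."""
    score, words = 0, text.lower().split()
    for i, word in enumerate(words):
        polarity = (word in POSITIVE) - (word in NEGATIVE)
        if polarity and i > 0 and words[i - 1] in NEGATORS:
            polarity = -polarity  # "not great" counts as negative
        score += polarity
    return score
```

The hand-written negation rule hints at why context is hard: each new construction ("hardly great", "great, just great") needs its own handling, which is exactly the gap modern systems try to close with machine-learned language models, and where human judgment is still needed.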

Troy Janisch, who leads the social insights team at US Bank, told last year’s Sentiment Analytics Symposium that human intuition is still needed to make sense of context.

AI, augmented and virtual reality

Eventually it seems inevitable that AI will play a key role in augmented-reality headsets: head-mounted displays that mix digital information into the wearer’s view, for instance showing a hotel concierge the likes and dislikes of guests as they approach.

Microsoft has already announced that the second generation of its Hololens headset, rumored to be released in 2019, will incorporate a dedicated chip for accelerating AI-related tasks like object recognition.

Although Hololens is still at an early stage, a number of companies are experimenting with using headsets in various roles. For instance, BAE Systems is using the Hololens to teach workers how to assemble a battery for its green HybriDrive buses.

What are the advantages of using AI tech in the workplace?

The Forrester report The CIO’s Guide To Automation, AI, And Robotics describes automation technologies as offering CIOs “numerous opportunities to magnify the impact of their digital transformation efforts”, citing improvements in terms of scale, speed, personalization of services, division of labor, quality and security.

Meanwhile analyst house Gartner, in its report Use Digital Workplace Programs to Augment, Not Replace, Humans With AI, sees the potential for “AI technologies” to act “as a complement to human skills” and to “drastically improve the quality of decision making and process efficiency”.

How can businesses implement AI in the workplace?

For businesses looking to take advantage of the AI-related technologies listed above, analyst house Ovum recommends, in its report 2018 Trends to Watch: Machine Intelligence, that enterprises “should aim to build open systems within the AI technology stack” to ensure ongoing compatibility as new technologies emerge.

For those who want to build their own AI service, there is plenty of cloud-based infrastructure tailored to training and running machine-learning models. There is also an increasing number of off-the-shelf, industry-focused AI-powered platforms being released by IBM and others, alongside companies, such as IPSoft, that will work with firms to implement AI and automation.

Ovum advises that “working with experts in the field will be much easier than building an internal AI capability, but large enterprises will reap long-term benefits from establishing some degree of internal expertise, particularly in sourcing and managing data, identifying business use cases, and managing the AI development process.”


Will humans be able to work alongside AI and will AI destroy jobs?

While there have been isolated incidents of automation and robotics replacing roles en masse, the ultimate impact of the AI revolution on the workplace is less certain at present. While some worry about widespread job destruction, for instance self-driving vehicles supplanting millions of truckers worldwide, others say there will be more value in AI augmenting the abilities of existing workers. Of course, even this scenario doesn’t preclude job losses, as a single AI-augmented worker may be able to perform roles that used to require multiple people.

Forrester predicts that automation will cannibalize 17 percent of all jobs in the US economy by 2027, although this will be offset by the creation of jobs equivalent to 10 percent of the workforce.
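Taken together, Forrester's figures imply a net loss equivalent to 7 percent of the workforce:

```python
# Net effect implied by Forrester's forecast for the US economy by 2027.
jobs_cannibalized_pct = 17   # share of existing jobs automated away
jobs_created_pct = 10        # new jobs, as a share of the workforce
net_change_pct = jobs_created_pct - jobs_cannibalized_pct
# net_change_pct is -7: a net loss equivalent to 7 percent of the workforce
```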

MIT economist Erik Brynjolfsson argues that society needs to prepare for the ramifications of a scenario where automation upends industries that currently employ large numbers of people.

“It’s not that the overall demand for labour falls, so much that the demand for certain types of skills fall and demand for other skills increase and if we don’t have a good match in the economy, and if we don’t think about it and develop our institutions correctly, then you’re going to have losers as well as winners,” he told ZDNet.

The argument is not necessarily that AI will destroy more roles than it creates, rather that those displaced may not have the skills, temperament or opportunity to move into the new roles created, which could have negative consequences for both individuals and societies. This mismatch may be particularly challenging for societies as automation accelerates the rate at which jobs are transformed and destroyed.

Which companies are at the forefront in developing AI and what are they doing?

Unsurprisingly the biggest tech companies in the world are also at the forefront of AI research.

Companies like Google, Microsoft, Amazon and Facebook are leading the way — with Google, Microsoft and Amazon’s cloud platforms offering both the raw infrastructure and on-demand AI-powered services such as speech, language, emotion and vision recognition, as well as services for building chatbots and deriving information from IoT data. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific, AI-related services aimed at everything from healthcare to retail.

The Robotic Process Automation field also sports a wide range of specialist companies, including Automation Anywhere, Blue Prism, Contextor, EdgeVerve Systems, Kofax, Kryon Systems, NICE, Pegasystems, Redwood Software, Softomotive, UiPath, and WorkFusion.

Outside of the US, Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI, in fields ranging from e-commerce to autonomous driving. As a country China is pursuing a nationwide plan to turn AI into a core industry for the country, one that will be worth 150 billion yuan ($22bn) by 2020. There is also a burgeoning AI start-up scene in the country, with facial recognition firm SenseTime recently valued at $4.5bn, based on its rapidly growing number of clients, whose ranks include Chinese city governments that use facial recognition software to identify people and detect suspicious behavior.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China’s favor.

Source: ZDNet
