Mark Ralls is the CEO of ActivTrak, the leader in workforce analytics and productivity management.
The rise of AI-powered chat tools like ChatGPT and Bing is astonishing and set to upend many aspects of our lives. None other than Bill Gates hailed generative AI (GenAI) tools as the most important demonstration of technology since the introduction of the graphical user interface.
The potential of chatbots as productivity-enhancing tools raises interesting and challenging questions about the definition of “work” and how we value it: Is an employee who uses AI doing less work? Or are they now doing more work, better and faster? Does that mean you pay them less? Or more? How do you know if they’re using AI tools at all? What are the policies for when they do or don’t?
Brave New World?
There is widespread concern that large language models (LLMs) like ChatGPT are coming for our jobs. But in focusing on worst-case scenarios, we’re missing a more subtle point: What does the adoption of LLMs mean for how we define and understand employee performance and productivity?
It’s easy to identify the jobs that will be affected first: the more repeatable the work, the more automatable it is. Clerical and other service functions, particularly those that involve responding to customer inquiries, come to mind. These jobs have already been impacted by developments like robotic process automation (RPA). With LLMs, those roles are at even greater risk.
No longer is an RPA bot playing “Mad Libs” in response to a customer inquiry, filling in discrete fields of a pre-created form. An LLM can now craft a human-like response to each question, complete with an appropriate emotional tone. For these repeatable roles, AI will only serve to make them better, faster, easier. The math is pretty simple.
The Role Of Knowledge Workers
The equation gets more complicated with high-level knowledge workers, whose qualities of creativity and judgment make them much harder to quantify and measure.
Few roles epitomize “knowledge work” quite like software development, where GitHub launched an AI tool called Copilot last summer. Copilot promises to speed software development by drawing context from comments and code as they are written and offering suggestions for whole lines of code or entire functions. A study by GitHub found that developers who use the tool complete some tasks in half the time, and 60% of users reported feeling more fulfilled with their jobs because they can focus on more satisfying tasks.
Microsoft is now pushing Copilot into the Office suite, promising further disruption to knowledge workers across a much wider variety of roles and industries. Several professions, including law and investment banking, stand to be significantly impacted by the addition of AI tools to established workflows. It is not hard to imagine an AI providing high-quality redlines to a run-of-the-mill commercial agreement or creating a standard financial model and associated PowerPoint presentation.
However, it is almost impossible to imagine any legal or banking client being comfortable moving forward with that output without the thorough review of a human expert. The value of AI tools is heavily dependent on two things: the quality of prompts and search queries, and the review of results—by humans.
The Role Of Leadership Teams
GenAI tools have come of age, and despite the efforts of some, there is likely no putting the genie back in the bottle. The question is how to take advantage of the genie’s powers while avoiding unintended consequences.
• First, protect your IP: Right now every inquiry helps train these models to deliver better results, which also means it is captured and stored. Sensitive data, whether company secrets or personally identifiable information (PII), should not be shared with LLMs.
• Be careful how it’s used: Raw output from GenAI tools is relatively easy to identify (for now) and thus may strike recipients as insincere. Using a GenAI tool to plumb the entire corpus of human knowledge to craft a great message is a good idea, but as with all GenAI results, it is important that a human reviews, edits and approves the final copy.
• Take a balanced approach: Like much in life, moderation is key. Almost all employees will improve their productivity by leveraging GenAI in some aspect of their work, while none should turn their job over to it. Train your employees when and where to use GenAI to improve their personal productivity and follow up to check proper usage.
• Experiment, iterate, learn, share: We are in uncharted territory. Create an open dialog within your team or company on great uses of GenAI to share and reinforce learnings, while providing corrective nudges to limit use cases that are out of bounds.
• Measure the impact: As employees adopt LLMs, some teams may get more done with fewer people. This creates opportunities to increase output, reassign staff to other activities or even explore novel ideas like the four-day work week.
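The first point above, protecting sensitive data, is the most mechanical of the five and can even be partially automated. As a minimal sketch, a company might scrub obvious PII from prompts before they ever leave the network. The regex patterns below are illustrative assumptions, not an exhaustive filter; production deployments would rely on dedicated data-loss-prevention tooling.

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage
# (names, addresses, account numbers) than these two regexes provide.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched PII with a labeled placeholder before the prompt
    is sent to any external LLM service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub("Contact jane.doe@acme.com or 512-555-0147 about the Q3 roadmap."))
```

A gateway like this sits between employees and the LLM endpoint, so the policy is enforced consistently rather than left to individual judgment.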
Surviving In The Age Of AI
AI tools augment employees’ own expertise and insight, but human expertise and judgment will ultimately ensure that AI results are meaningful, accurate and relevant to the task at hand.
The employee—whether executive, manager, developer or customer service rep—who can better guide an AI tool will get better results and create more value, faster. Human judgment has not been replaced. In fact, it will be even more valuable, and available, now that people can spend less time “doing” and more time “deciding.”
Finally, it is important to note that these AI models are trained on large data sets of human-generated data. They are unlikely to be able to invent, in the purest sense of the word. That too will remain a human endeavor.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.