
Lesson 3 — Wellbeing, Sustainable Practice, and Your Stakeholder

Module 2, Unit 1 | Lesson 3 of 3

By the end of this lesson, you will be able to:

  • Describe what wellbeing means in the context of AI adoption, including dimensions beyond stress management
  • Explain how to raise an automation proposal in a way that respects colleagues' concerns and invites honest feedback
  • Describe the environmental costs of AI systems and identify what a sustainable mindset means for an AI practitioner in practical terms
  • Explain why trust is the long-term precondition for successful AI deployment, and how it is built over time

Wellbeing beyond the obvious

K26 — the apprenticeship standard's requirement that you understand the benefits of wellbeing and safe working practices — is sometimes read narrowly, as if wellbeing were a synonym for stress management or mental health support. Those are important, but they are not the whole picture.

In the context of AI adoption, wellbeing encompasses a broader set of dimensions:

  • Job satisfaction — the degree to which a person finds their work meaningful, varied, and within their control. A colleague whose role is gradually stripped of the parts they found most engaging, without any corresponding gain in the work that replaces it, may be technically fine but is experiencing a real loss.
  • Sense of purpose — people whose work involves direct human contact, craft, or visible impact often find that automating routine tasks removes something they valued without them having articulated that value before it went.
  • Autonomy — the ability to make judgements, exercise discretion, and take responsibility for one's own work is a source of professional dignity, and its reduction can be experienced as deskilling regardless of whether the residual tasks are objectively more complex.
  • Skill development — if an automated system removes the practice that kept a skill sharp, people may notice its absence only when they need to rely on that skill in an exceptional case.

None of this means you should not automate. It means you should design automation with these dimensions in mind, and not assume that removing the "tedious" parts of someone's work is an unambiguous gift.

💬 Reflection

Think about the process you are planning to automate. If a colleague has been doing this task for two or three years, what might they have gained from it that is not obvious from the outside? Procedural knowledge about edge cases? A sense of ownership over a part of the workflow? A relationship with another team built through the back-and-forth of that process? These are the things worth asking about before you design them out of the picture.


Raising proposals in a way that respects people

How you introduce an automation proposal to colleagues matters as much as what you propose. There is a version of this conversation that creates anxiety and resistance from the outset, and a version that earns genuine engagement. The difference is mostly about framing and timing.

Starting with the technology is almost always the wrong move. "I've been looking at an AI tool that could automate this" immediately raises questions about job security, changes to workload, and who decided this without consulting anyone. Starting with the problem is usually better: "I've noticed this step takes a significant amount of time each week — does that match your experience?" That question invites a colleague into a shared diagnosis rather than presenting them with a solution they did not ask for.

Being honest about what you know and do not know is equally important. If you are in an exploratory stage, say so. If you are presenting a proposal that has already been approved, say that too. Colleagues who find out later that a decision was presented as exploratory when it was already made feel misled, and that feeling is difficult to recover from. The trust you need for good adoption is built or lost in these early conversations.

The practical ask also matters. If you want to interview a colleague about their experience of a process, frame it accurately: you are trying to understand what the process involves and what concerns they would have, not seeking their approval or delivering a decision. That framing is more respectful and also more likely to produce honest answers.

Finally, follow up. If a colleague raises a concern in a conversation and nothing changes and no one refers to it again, the implicit message is that their input was collected but not considered. Even a brief acknowledgement — "I raised the data access concern you mentioned with the IT team and here is what they said" — demonstrates that the conversation had a genuine effect.


Sustainable practice: the environmental cost of AI

B5 in the apprenticeship standard asks you to adopt a sustainable mindset in digital activities and to consider climate change and the move to net carbon zero in your work. For an AI practitioner, this has a specific application that deserves direct attention.

AI systems consume significant amounts of energy. Training large language models — the kind that underpin the AI assistants used in automation workflows — requires enormous computational resources, with associated energy and carbon costs. Training, however, is largely a one-off cost. The more directly relevant cost for a practitioner building automation workflows is inference: each time you send a prompt to a large language model, processing happens on servers in a data centre, and that processing draws power.

At the scale of one API call, the cost is negligible. At the scale of a workflow running thousands of times per day across an organisation, it accumulates. Responsible practitioners do not ignore this.
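To make the accumulation concrete, here is a back-of-envelope sketch in Python. The per-call energy figure and the volumes are illustrative assumptions, not measured values — real energy per request varies widely with model size, hardware, and data centre efficiency:

```python
# Back-of-envelope sketch of how per-call inference energy accumulates at scale.
# All figures below are illustrative assumptions, not measurements.

ENERGY_PER_CALL_WH = 0.3   # assumed watt-hours per LLM API call (illustrative)
CALLS_PER_DAY = 5_000      # assumed workflow volume across an organisation
WORKING_DAYS = 260         # working days per year

daily_kwh = ENERGY_PER_CALL_WH * CALLS_PER_DAY / 1000
annual_kwh = daily_kwh * WORKING_DAYS

print(f"Daily:  {daily_kwh:.1f} kWh")   # 1.5 kWh
print(f"Annual: {annual_kwh:.0f} kWh")  # 390 kWh
```

The exact numbers matter less than the shape of the calculation: a cost that is invisible per call becomes a measurable annual figure at organisational volume.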

The practical implication is not that you should avoid AI. It is that you should choose the right tool for the task. A large, general-purpose language model is exceptionally capable — and exceptionally expensive to run compared to a smaller, more specialised model that might handle your specific task just as well. If your automation requires classifying an email into one of five categories, a purpose-built classification model will almost always be more efficient than prompting a frontier LLM to do the same thing. If your task requires generating nuanced, contextually appropriate prose, the larger model may genuinely be the right choice.

The question to ask is: what is the minimum level of capability required to do this task reliably? That question serves both sustainability goals and cost efficiency. It is also good engineering practice — oversized tools introduce unnecessary latency, cost, and failure modes alongside their unnecessary carbon footprint.
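As an illustration of right-sizing, the five-category email example above can sometimes be handled with no LLM at all. The sketch below is a hypothetical keyword-overlap classifier — the categories and keywords are invented for illustration, and a real system would be trained on labelled examples — but it shows the kind of minimal tool worth considering first:

```python
import re

# Hypothetical keyword-overlap classifier for a five-category email triage task.
# Categories and keywords are invented for illustration; a real system would be
# trained on labelled data. The point: simple tasks rarely need a frontier model.

CATEGORY_KEYWORDS = {
    "billing":   {"invoice", "payment", "refund", "charge"},
    "technical": {"error", "crash", "bug", "login"},
    "shipping":  {"delivery", "tracking", "parcel", "dispatch"},
    "account":   {"password", "username", "profile", "subscription"},
    "other":     set(),
}

def classify(email_text: str) -> str:
    """Return the category whose keyword set overlaps most with the email."""
    words = set(re.findall(r"[a-z]+", email_text.lower()))
    best = max(CATEGORY_KEYWORDS, key=lambda c: len(words & CATEGORY_KEYWORDS[c]))
    return best if words & CATEGORY_KEYWORDS[best] else "other"

print(classify("My invoice shows a duplicate charge, please refund it"))  # billing
print(classify("Good morning, just saying hello"))                        # other
```

A rule-based baseline like this also gives you something to measure an AI model against: if a frontier LLM only marginally outperforms a few lines of deterministic code, the extra energy, cost, and latency are hard to justify.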

💬 Reflection

For your proposed automation: what AI capability does the core task genuinely require? Classification? Text generation? Extraction of structured data from unstructured input? Is the tool you were planning to use the right-sized option for that specific task, or are you defaulting to the most familiar or most capable tool available? These are questions worth putting to your coach in your next session.


Building trust over time

The final theme of this unit is trust — because trust is the variable that ultimately determines whether an AI deployment succeeds in the long term.

Technical success is necessary but not sufficient. A system that works accurately, runs reliably, and integrates cleanly with the existing workflow can still fail if the people working alongside it do not trust its outputs. A system that people do not trust will be worked around: the manual process will continue in parallel, colleagues will re-check every output, and the time savings you projected will not materialise.

Trust in an AI system is built through consistent, observable performance over time. It is not built through assertions — "the system is highly accurate" — but through experience: colleagues seeing the output, checking it themselves, finding it correct, and over time developing a calibrated sense of when to scrutinise it more closely and when it can be relied upon. This means that early deployment should create the conditions for that calibration. Make outputs visible. Provide channels for colleagues to flag errors without friction. Review those flags and communicate what you did about them. Be transparent about the system's known limitations and the types of input it handles less well.

Trust also depends on what happens when the system fails. Every AI system will produce incorrect outputs at some point. The question is whether people experience that failure as one they could have predicted and handled — because the exception path was clear and the human review step caught it — or as a surprise they were not equipped to manage. The former builds trust. The latter destroys it.


❓ Knowledge check

A colleague has been manually processing customer onboarding forms for three years. After an automation is deployed that handles the data extraction step, they report feeling like "the job isn't mine anymore." Which dimension of wellbeing does this most directly reflect?

A practitioner is building an automation that will classify customer enquiries into one of six predefined categories. They are considering using a frontier large language model API because it is the tool they know best. Which of the following most accurately describes the sustainable mindset perspective on this decision?

An AI practitioner deploys an automation and six weeks later notices that the manual verification process they designed as a backstop is never used — colleagues simply accept every output. Which of the following is the most appropriate response?


📝 Activity 2 — The Stakeholder Wellbeing Interview

Estimated time: 20–30 minutes for the conversation, plus 30 minutes to write up

Open up your Module 2 Unit 1 Workbook

Complete before your next 1:1 and share with your instructor in advance

Identify one person in your team or organisation whose role would be directly or indirectly affected by the process you are planning to automate. This could be someone who currently performs the process, someone who uses its outputs, or your line manager. Arrange a 20–30 minute conversation with them. You are not pitching your automation idea — you are listening and learning.

Before the conversation, prepare using these five questions. Adapt the wording to suit your context and the relationship.

  1. Can you describe how you currently do this process — what does a typical instance look like for you?
  2. What do you find most time-consuming or frustrating about it?
  3. If this process were partly or fully automated, what would your main concern be?
  4. What parts of this process do you think really need a human involved — and why?
  5. If the time you spent on this were freed up, what would you rather be doing?

After the conversation, write a Stakeholder Wellbeing Summary (one page maximum) covering:

  • What you learned about the human side of this process that you did not already know
  • What concerns or anxieties were raised, and how significant they appeared to the person you spoke to
  • What this means for how you design your automation — specifically, what you would do differently based on this conversation
  • Any follow-up you committed to during the conversation

This document becomes part of your portfolio evidence. Bring it to your coaching session.


Activity 2 — Alternative route (for learners unable to conduct an interview)

If conducting a stakeholder interview is not feasible at this stage — for example, you work in a very small team where a conversation about automation could create unnecessary concern before a proposal is ready — complete the following instead.

Find two publicly available accounts of AI or automation implementation from a similar sector or context to yours. These could be case studies, news articles, employee accounts, or published reports. For each account, identify: how the automation affected the people doing the process; what concerns employees raised; how or whether those concerns were addressed; and what you would do differently based on what you read.

Then apply the same five questions from the primary activity to an imagined colleague in your context, answering on their behalf based on your knowledge of the role and your research. Write a Stakeholder Perspective Analysis (one page maximum) following the same structure as the Wellbeing Summary above. Note clearly that this is an analytical exercise rather than a direct interview, and explain why you chose this route.


📝 Activity 3 — Sustainable AI Consideration

Estimated time: 20 minutes

Add this as a section to your Stakeholder Wellbeing Summary document from Activity 2

Consider the AI component or components in your proposed workflow. Research the approximate compute and energy profile of the type of AI model you are planning to use — for example, a large language model API, a standard classification model, or a computer vision model. This does not require precision: a directional understanding is the goal.

Write two to three sentences addressing: what the environmental cost of your solution looks like at scale (assuming it runs at full operational volume, whether that is hundreds or thousands of instances per day), and whether there is a lower-resource alternative that would handle the same task reliably enough for your purposes.
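The "lower-resource alternative" question can be framed numerically with a minimal comparison sketch. Both per-call energy figures below are assumptions for illustration only — substitute your own research-based estimates for the models you are actually considering:

```python
# Compare two hypothetical options for the same task at full operational volume.
# Both per-call energy figures are assumptions for illustration only.

CALLS_PER_DAY = 2_000  # assumed full operational volume

options_wh_per_call = {
    "frontier LLM API":       0.3,    # assumed watt-hours per call
    "small classifier model": 0.005,  # assumed watt-hours per call
}

daily_kwh = {
    name: wh * CALLS_PER_DAY / 1000 for name, wh in options_wh_per_call.items()
}

for name, kwh in daily_kwh.items():
    print(f"{name}: {kwh:.3f} kWh/day")
```

Even with rough inputs, a comparison like this turns "is there a lower-resource alternative?" from a vague worry into a concrete, directional answer you can defend in your write-up.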

If you are unsure what model type your chosen tool uses, that is a reasonable starting point: find out. Your coach can help if needed.


KSB coverage — Unit 1

| KSB | Description | Where evidenced |
|-----|-------------|-----------------|
| K3 | Social and economic impacts of AI on roles, particularly non-technical staff | Lesson 1; Activity 1 |
| K15 | Principles of human oversight and human-AI collaboration | Lesson 2; Activity 2 |
| K26 | Benefits of wellbeing and safe working practices | Lesson 3; Activity 2 |
| B1 | Work independently and take responsibility for a productive, professional working environment | Activity 2 (independent stakeholder research and interview); Activity 3 |
| B2 | Adapt to changing circumstances, being flexible and responding proactively | Activity 2 — specifically the conversation itself and any follow-up actions |
| B5 | Sustainable mindset in digital activities | Lesson 3; Activity 3 |

⏭️ Up next — Unit 2: With your stakeholder perspective gathered and your initial oversight checkpoints mapped, Unit 2 turns to the legal and regulatory frameworks that govern what you are permitted to do with the data and the people involved — beginning with UK GDPR and data protection.