What If Your Agents Couldn't Come In Tomorrow?

Rising fuel costs and potential work-from-home mandates are forcing contact center leaders to confront a critical question: are you ready if agents can't come in tomorrow?

AI Key Takeaway

This post examines the operational risks contact centers face as rising fuel costs and potential work-from-home mandates make remote agent work increasingly likely. It explores how network contention, unreliable hardware, and poor audio quality degrade both agent performance and customer trust—citing MOS score data linking quality issues to higher attrition. The authors argue that proactive CX observability, proper baseline testing under real load, and agent-supportive tooling are essential to building a resilient distributed workforce before conditions force the transition.


My colleague Matt Bangy and I have been having a lot of good conversations lately. The kind that start in one place and end up somewhere neither of us expected. This one started with fuel.

The global fuel situation has been building pressure in ways that are starting to show up in policy decisions. Governments are discussing work-from-home mandates. Here in Australia, petrol stations have been running dry and the government announced free public transport to ease some of the strain on commuters. Even where mandates haven't landed, the cost of getting to work is already shifting behaviour. People are doing the maths every morning before they even reach for their keys.

Matt and I landed on the same question fairly quickly. What if your agents couldn't come in tomorrow? Not as a thought experiment. As a real operational question that contact center leaders should be asking themselves right now, while there is still time to do something about the answer.

We have been here before, and the track record is not particularly flattering. COVID forced the industry into distributed work without warning and most organizations duct-taped it together and called it a strategy. Agents went home with whatever gear they had. Ironing boards became desks. Headsets were a lucky dip. Calls degraded, customers noticed, and the moment restrictions lifted, a lot of organizations exhaled and moved on without fixing the underlying infrastructure. Matt put it plainly: some people are doing hybrid really well, some people have just taken their eye off the prize and moved their attention toward the next shiny thing.

What feels different this time is that we can see it coming. There is runway. The question is whether we use it.

Matt comes at this from a deeply technical place, which is exactly why I wanted to have the conversation with him. When I asked what consistently goes wrong when agents work from home, he broke it into two categories.

The first is the network. Home connections are contended by nature. The kids are streaming, the neighbour is doing something mysterious with bandwidth, WiFi is unpredictable under load. Some agents are on Starlink or mobile hotspots because that is genuinely their best available option. Unlike a managed office environment, none of this is within the organisation's control, and standard monitoring tools were never built to see it clearly.

The second category is everything else. Hardware that performs fine in the office can behave very differently at home. Software patches get pushed at inconvenient moments. Matt described a customer where the degraded performance was actually happening in the office, not at home. The desktop team had been using agents' in-office days to push firmware upgrades and updates during working hours, without coordinating with the contact center or network teams. Agents would sit down, start taking calls, and their machines were mid-update. High CPU, sluggish screens, degraded audio. It is a bit like free-flowing traffic when someone jams on the brakes. The delay ripples backwards for kilometres, and nobody at the back of the queue knows why they stopped. By the time it shows up in the quality data, the cause has long since moved on.

Beyond the technical layer, there is the isolation. An agent in the office can tap someone on the shoulder. They have ambient awareness of the team around them. At home, they are on an island. When something goes wrong, they have no easy way to know if it is their router, their headset, the platform, or something buried deeper in the stack. So they start troubleshooting in the way any non-technical person does: swap the cable, reboot, check the jacks, file a helpdesk ticket. All while a customer is waiting.

This is where the conversation got really interesting, because Matt and I kept arriving at the same place from different directions. Not all bad agent behaviour is a DNA problem. A lot of it is a desktop problem.

When audio degrades during a call, cognitive load climbs fast. The mental bandwidth that should be going toward the customer's problem, toward empathy and reading tone and finding the right resolution, gets consumed by the technical friction instead. The call suffers. The quality score suffers. The agent sees that score and knows they were trying their hardest, and the number says otherwise, with no visible explanation.

We have data on this that genuinely stopped me when I first saw it. Agents with consistently poor MOS (Mean Opinion Score, the long-standing measure of voice quality built from latency, jitter and packet loss) were fifty percent more likely to leave within the next three months. When we traced the chain, it led from high CPU utilisation to transcription loss, which then flowed through to quality scores. Agents with around thirty percent of their reviews flagged as unsatisfactory were heading out the door. Not because they had stopped caring, but because they were fighting a system that was stacking the deck against them, and nobody could see it.

MOS has been around for over fifteen years. Most of us, myself included when I was running contact centres, treated it as background noise. A network team metric. When you connect it to attrition, it starts looking like an early warning system that the industry has been walking past for years.
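To make the connection between network stats and MOS concrete, here is a rough sketch of one widely used simplification of the ITU-T G.107 E-model, which estimates an R-factor from latency, jitter and packet loss and then maps it to a MOS value. The function name and the impairment constants are illustrative simplifications, not Operata's scoring method:

```python
def estimate_mos(latency_ms: float, jitter_ms: float, packet_loss_pct: float) -> float:
    """Rough MOS estimate from network stats, via a simplified E-model."""
    # Jitter forces the jitter buffer to hold packets longer, so it is
    # commonly folded into an "effective latency" figure.
    eff_latency = latency_ms + 2 * jitter_ms + 10.0

    # A clean narrowband call starts at an R-factor of roughly 93.2.
    r = 93.2

    # Delay impairment: mild below ~160 ms, much steeper above it.
    if eff_latency < 160:
        r -= eff_latency / 40
    else:
        r -= (eff_latency - 120) / 10

    # Packet-loss impairment (simplified linear penalty).
    r -= 2.5 * packet_loss_pct
    r = max(0.0, min(100.0, r))

    if r <= 0:
        return 1.0
    # ITU-T G.107 mapping from R-factor to MOS (capped at 4.5).
    mos = 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
    return round(min(mos, 4.5), 2)
```

A healthy office connection (20 ms latency, 2 ms jitter, no loss) lands well above 4.0, while a contended home link with heavy jitter and loss drops below 3.0, which is the territory where the attrition data above starts to bite.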

There is also research from the University of Southern California and the Australian National University on the relationship between audio quality and trust. When a voice sounds degraded, listeners unconsciously reduce their confidence in what is being said. The content does not change. The trust shifts anyway. For a contact centre, where the entire value of an interaction depends on a customer feeling heard, that is not a minor variable.

So what does better actually look like?

Matt's view, and I think he is right, is that it starts with arming agents properly. Not in a vague, aspirational sense, but concretely. An agent working from home needs to know their environment is being monitored, not in a surveillance sense, but in the sense that the platform has their back. A proactive notification that surfaces degrading audio quality before the customer feels it is the difference between a confident agent and one who is already rattled before the conversation begins. That is what a well-implemented observability and agent copilot experience actually does. It is not a tool for tool's sake. It is the functional equivalent of having someone nearby when things go sideways.
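As a minimal sketch of what such a proactive notification could look like under the hood, the logic below watches a rolling window of recent MOS samples and only raises a nudge once the window average degrades, so one bad sample does not rattle the agent mid-call. The class name, threshold and window size are hypothetical, not a description of any particular product's implementation:

```python
from collections import deque


class AudioQualityWatcher:
    """Nudge the agent when recent MOS samples trend below a threshold."""

    def __init__(self, threshold: float = 3.5, window: int = 5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, mos: float):
        """Record a MOS sample; return an alert string if quality is degrading."""
        self.samples.append(mos)
        # Only alert once the window is full AND the average has dropped,
        # so a single glitch does not trigger a false alarm.
        full = len(self.samples) == self.samples.maxlen
        degraded = full and sum(self.samples) / len(self.samples) < self.threshold
        return "check your connection" if degraded else None
```

The point of the windowed average is exactly the "has their back" framing: the agent hears about a sustained trend they can act on, not every transient blip.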

Beyond that, every organisation operating any kind of hybrid model needs a baseline. If you do not have a clear before-and-after picture of how agents perform at home versus in the office, you are making decisions without enough information. Capture that data at the individual agent level. Aggregate it by location. Compare the two modes honestly, because it might not be the behavioural gap you expect. It might be network, hardware, or software carrying more variance than anyone assumed.
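To make the baseline idea concrete, here is a minimal sketch of capturing quality at the individual agent level and aggregating it by location, so home and office modes can be compared side by side. The record layout, the sample data and the function name are all hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical call records: (agent_id, location, work_mode, mos_score)
calls = [
    ("a1", "sydney", "office", 4.3), ("a1", "sydney", "home", 3.1),
    ("a2", "sydney", "office", 4.2), ("a2", "sydney", "home", 4.1),
    ("a3", "melbourne", "office", 4.0), ("a3", "melbourne", "home", 2.8),
]


def baseline(records, key: str) -> dict:
    """Average MOS per (group, mode) so home vs office can be compared."""
    buckets = defaultdict(list)
    for agent, location, mode, mos in records:
        group = agent if key == "agent" else location
        buckets[(group, mode)].append(mos)
    return {k: round(mean(v), 2) for k, v in buckets.items()}


per_agent = baseline(calls, "agent")        # individual-level picture
per_location = baseline(calls, "location")  # aggregated by site
```

Even in toy data the pattern shows up: one agent's home numbers sit close to their office numbers while another's collapse, which points at network or hardware variance rather than a behavioural gap.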

Then test it under real load. A pilot group of a handful of agents does not stress test anything meaningful. It confirms that a handful of agents with decent setups can make calls. What you actually need to know is how the full weight of a distributed workforce lands on your infrastructure, your support model and your escalation paths before the conditions force your hand.

Matt made a point near the end of our conversation that I keep coming back to. Get this right and it becomes a differentiator in hiring. An agent who has a reliable setup, tools that work in their favor, and quality scores that reflect their actual performance rather than their ISP, is an agent who tells other people about the place they work. Glassdoor reviews reflect operational reality. Word travels in this industry. Being known as a contact center where the technology supports people rather than undermining them is worth more in the talent market than most organizations realize.

The baseline should never just be do no harm. That is a low bar dressed up as a standard. There is too much going on outside the contact centre walls for the organisation to also be a source of friction inside them. The goal is to create an environment where an agent can sit down, log in and do the job they were hired to do, without becoming their own first-line IT support in the process.

We have been given a warning this time. We have runway. The question is simply whether we use it or repeat the mistakes of our recent history.

Until then, and as always, hooroo.

Check out the video of our chat on YouTube, and if you want to explore your WFH readiness, get in touch with the team at Operata.

FAQs

Why should contact center leaders prepare for remote work now?

Rising fuel costs and potential work-from-home mandates mean agents may not be able to commute to the office. The lessons of COVID showed that unprepared transitions to remote work lead to poor customer experiences, agent burnout, and attrition. Acting now while there is still runway gives organisations time to build the infrastructure and processes needed to support a distributed workforce.

How does poor audio quality affect agent performance and customer trust?

When audio degrades, agents spend their mental bandwidth fighting technical friction instead of focusing on the customer. Research shows that degraded voice quality unconsciously reduces listener trust, regardless of what is being said. Data also links consistently poor MOS scores to significantly higher agent attrition, as agents leave when the system works against them.

What role does CX observability play in supporting remote contact center agents?

CX observability gives remote agents and their organizations real-time visibility into the technical environment, from network quality to hardware performance. It proactively surfaces issues like degrading audio before customers feel them.

Article by Luke Jamieson. Published April 2, 2026, in CX.