
It is 2030. The world has changed: Big Tech has infiltrated every part of people’s lives, surveilling their every movement.


Most, if not all, decisions are delegated to AI systems operated by Big Tech. There is no transparency over how or why those decisions are made, how those systems have been built, or how they function.


The energy demands of manufacturing and operating the datacentres needed to build and run AI systems have grown exponentially, and the environment is a polluted mess: in their rush for AI dominance, governments abandoned pledges to protect the natural world and address the climate crisis.


  • Natural resources of rare metals have been plundered.

  • Freshwater resources are depleted.

  • 80% of global energy and water resources are ploughed into datacentres.

  • There are mountains of e-waste.

  • The promises of tech moguls did not materialise: AI did not save more energy than it consumed, and AGI never emerged from the technological cul-de-sac that now dominates the world.

  • Many jobs have disappeared altogether - particularly in the creative industries, which have been hollowed out by the twin pressures of AI’s insatiable appetite for data and society’s demand for disposable filler content.

  • Those in employment work the complex, labour-intensive manual jobs that it is not yet economically efficient to automate with AI and robots. They live in rented accommodation owned by their employers, paid for using their employer’s unique and non-transferable cryptocurrency. There is no agency, creativity, or intellectual fulfilment: people have become armatures and actuators for faceless, voiceless, bodiless AI systems.



How did this happen?

In 2025, at a crossroads, startups, SMEs, and governments chose to double down on their dependency on Big Tech.


Under threat of a cost-of-living crisis, environmental collapse, and the destabilisation of democracy, people opted to emphasise short-term returns over long-term value - prioritising the needs of Big Tech businesses over the needs of people and the environment:


  • Investing yet further in datacentres owned and operated by the largest technology firms

  • Making political decisions that favoured the business interests of oligarchs

  • Implementing weak regulations


Four years later, Big Tech changed the deal.

Sidestepping copyright and IP law, Big Tech asserted the right to mine the data stored in and processed by its datacentres - and took ownership of all of it.


Startups, SMEs, and governments alike were caught on the back foot: deeply embedded in these proprietary systems, they were unable to pull their data out or migrate their operations in time.

And, from that point onward, Big Tech owned everything.



You may think this is far-fetched fear-mongering - or a rejected script for Doctor Who - but, to some extent, these things are already happening:


  • Major AI investments in the UK and the USA are throwing resources at Big Tech, ensuring the supremacy of its vision of AI

  • Since 2017, Spotify has been quietly replacing art with royalty-free muzak to boost profits

  • In 2024, Adobe demanded access to customers’ data for vague purposes - only clarifying its position in the face of a revolt

  • Companies like OpenAI resist calls for transparency about the data used to train their AI systems - and may not even know themselves

  • Copyright claims against Microsoft’s unilateral scraping of code stored on its popular GitHub platform have been dismissed

  • The quality of jobs created by investment in AI is often poor

  • Big Tech is pushing back against the regulation of AI and is now influencing global politics at the highest level in order to aggressively defend its business interests, ambitions, and vision for AI



What can you do about it?

The future doesn’t need to look like this.

We can have responsible development and deployment of AI that doesn’t exploit people and the environment, that doesn’t cause harm, and that builds long-term value for startups, small businesses, and the wider economy.


But we have to start working towards it, right now.

Here are some concrete actions you can take - today - to help avoid the worst excesses of unregulated Big Tech and its vision for AI, and start building long-term value in your business:


1. Build your own capability using FOSS AI

Rather than purchasing AI services from Big Tech firms that don’t know you or your business, consider hiring full-time, part-time, or even fractional (retained) experts, and building long-term relationships with them.


Avoid consuming expensive and wasteful general-purpose AI services from Big Tech behemoths like AWS and OpenAI when you don’t need them. Develop your own lightweight AI solutions tailored to your problem, organisation, or industry - and share them with others via platforms like Hugging Face to build trust.


These may not be based on the latest, shiniest, most powerful technologies, but the performance of AI models one or two generations behind the curve is more than adequate for specific use-cases.
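As a concrete illustration, here’s a minimal sketch of what lightweight and local can look like in practice, using the open-source Hugging Face transformers library. The model named below is just one public example of a distilled, older-generation model - an assumption for the sketch, not a recommendation - so swap in whatever open model fits your domain.

```python
# A minimal sketch of running a small, open model on your own hardware
# instead of calling a Big Tech API. Assumes `transformers` and `torch`
# are installed (pip install transformers torch); the model below is one
# public example of a lightweight, older-generation model.
from transformers import pipeline

# Downloaded once, then runs entirely locally: no per-call fees,
# and no customer data leaving your organisation.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

feedback = [
    "Delivery was two days late and nobody told us.",
    "The new invoicing flow saved our team hours this week.",
]

for text, result in zip(feedback, classifier(feedback)):
    print(f"{result['label']:>8} ({result['score']:.2f}) {text}")
```

A distilled model like this runs comfortably on a modest CPU, which is exactly the point: for a narrow, well-defined task, you rarely need frontier-scale infrastructure.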


The competitive edge isn’t in the training data, the model, or anything else technical: it’s in how you apply a solution - fairly, safely, and without bias - to create value for customers and employees. They don’t care how the sausage gets made, but they do need to trust that the sausage is safe.


2. Support effective regulation

While the EU has already published its first regulation on AI, the UK has thus far adopted a relatively permissive non-statutory approach - potentially signalling yet more leniency to come.


Take the time to understand what the new regulations are asking of you, and ask for help (if you need it) to understand what your business needs to do, right now, in order to make the most of AI.


Support regulation where it is clear and where it protects the rights of individuals and businesses: participate in the local and national consultations that will shape the next decade of the exploitation of data, AI, copyright, and IP - but also call out where, from your perspective, regulations are unclear, likely to be ineffective, or overly restrictive.


In most cases, it’s a question of getting the semantics right, and the only way to do that is for governments to hear from as many people as possible.


3. Develop and deploy AI responsibly

Even while the UK government continues to crawl on AI regulation, individual organisations can act proactively and independently of HMG to develop and deploy AI solutions according to good practice principles, including:


  • Explicitly defining the impacts - both positive and negative - that developing or deploying AI for a given use-case may be expected to elicit, and measuring against those expectations.

  • Developing and/or deploying high-risk AI use-cases - those that can engender unpredictable behaviour - only in sealed computing environments, and avoiding altogether applications of AI that would violate existing laws were their processes executed by a human.

  • Being transparent about the source code, training data, testing data, algorithms, parameters, loss functions, energy and water consumption rates, and performance of the AI solutions you build and use (see the sketch after this list).

  • Developing and deploying only those applications of AI that are fair, unbiased, and explainable.

  • Taking responsibility for the negative impacts of the solutions you develop and deploy.

  • Considering the upstream supply-chains of the AI systems you build on, and opting not to use those that exploit or cause harm.
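To make the transparency principle tangible, here’s a toy sketch (in Python) of recording those disclosures as structured data you could publish alongside a model. Every field name and value is hypothetical, invented for illustration - this is not a formal standard, and established model-card formats already exist and are worth adopting.

```python
# A toy sketch of the transparency bullet above: capture what you disclose
# about a model as structured data published alongside the weights.
# Every field name and value here is hypothetical, for illustration only.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class ModelDisclosure:
    name: str
    source_code_url: str
    training_data: str
    testing_data: str
    algorithm: str
    parameter_count: int
    loss_function: str
    energy_kwh_per_training_run: float  # estimate; disclose your method too
    water_litres_per_training_run: float
    headline_metric: str
    known_limitations: list[str] = field(default_factory=list)

# Invented example values for a hypothetical in-house model.
card = ModelDisclosure(
    name="invoice-classifier-v2",
    source_code_url="https://example.org/acme/invoice-classifier",
    training_data="12,400 anonymised invoices, 2021-2024, consent on file",
    testing_data="held-out 20% split, stratified by supplier",
    algorithm="fine-tuned DistilBERT",
    parameter_count=66_000_000,
    loss_function="cross-entropy",
    energy_kwh_per_training_run=3.2,
    water_litres_per_training_run=0.0,
    headline_metric="F1 = 0.91 on the test split",
    known_limitations=["English-language invoices only"],
)

# Publish this JSON next to the model so customers and auditors can check it.
print(json.dumps(asdict(card), indent=2))
```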



4. Dream big, act local

Consider all the ways your organisation currently creates value for itself and its stakeholders, and all the ways it could do so in the future.


  • Imagine new locations, customer segments - anything at all.

  • Ask yourself which of these could be wholly or partially automated and/or assisted by AI services.

  • Estimate the impacts of investing in those automations / assistive services and how you might measure them.

  • Decide which, if any, offer the right balance of impacts to justify investment (a toy scoring sketch follows this list).
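To make the balance-of-impacts step concrete, here’s a toy sketch of a weighted scoring pass over candidate use-cases. The candidates, criteria, weights, scores, and threshold are all invented for illustration; yours will differ.

```python
# A toy sketch of weighing candidate AI investments. All use-cases,
# criteria, weights, and scores below are invented for illustration.

# Weights reflect what matters to your organisation (they sum to 1 here).
WEIGHTS = {
    "value_created": 0.4,       # revenue saved or earned
    "staff_impact": 0.3,        # effect on job quality, not just headcount
    "environmental_cost": 0.2,  # energy, water, hardware footprint
    "lock_in_risk": 0.1,        # how hard it would be to leave the supplier
}

# Scores range from -1.0 (strongly negative) to +1.0 (strongly positive).
candidates = {
    "invoice triage assistant": {
        "value_created": 0.7, "staff_impact": 0.4,
        "environmental_cost": -0.2, "lock_in_risk": -0.1,
    },
    "fully automated customer support": {
        "value_created": 0.5, "staff_impact": -0.8,
        "environmental_cost": -0.5, "lock_in_risk": -0.7,
    },
}

for name, scores in candidates.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    verdict = "worth a closer look" if total > 0.2 else "park it for now"
    print(f"{name}: {total:+.2f} - {verdict}")
```

The numbers matter far less than the discipline: writing the weights down forces you to say out loud what you’re optimising for.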


…but, if you do decide to invest, also consider working with local partners to deliver the expected value.


These could be decentralised datacentres outside of Big Tech, or expert consultants and software engineers in your local business district.


That way, you’re not funnelling resources out of your local economy and into the pockets of Big Tech; you’re collaborating with and supporting other businesses, working together to build shared long-term value - with the added benefits that local partners will probably be more competitively priced, physically nearby, more likely to understand your business and local issues, and just as expert in their field as anyone at Google consulting on behalf of SMEs.


5. Defend your rights and the rights of others

HMG and other governments are currently consulting on the future of copyright and IP protections within their jurisdictions. The outcomes will have profound implications not only within those jurisdictions, but also in how copyrighted materials are transacted and monetised internationally for a generation or more.


If you’re based in the UK, it’s worth considering your personal risk exposure - and that of your organisation - to changes in copyright and IP law, and making your voice heard.


Ultimately, changes to copyright protections may influence what we choose to share via the web, and how we store information in order to protect it from data harvesting and unlicensed exploitation. You may want to consider proactively changing your policies on what can be published online by your organisation - or about your organisation - and the level of security and protection you are affording your data.



The future of AI doesn’t have to be defined by Big Tech

By prioritising responsible, transparent development, supporting local businesses rather than Big Tech, and advocating for fair and effective regulation, we can all wield the power and agency to ensure that AI serves people, and not the other way around.


The choice to build a sustainable, ethical future for AI is ours, and it starts today.



Post Credit

Alex Leathard (he/him)

Independent Data & AI Expert