How To Deal With The Pressure To Deploy AI


Kuba Filipowski

Updated Apr 2, 2024 • 16 min read

The hype is huge, but at the scale of individual companies, the AI revolution will need concerted effort from leadership and employees alike to deliver on its promise.

In the race towards an AI-powered future, one key obstacle we must not forget about is the law of inertia. As industry expert Ethan Mollick has repeatedly stated, “even if there is no development beyond the current level of AI, we have at least a decade of absorbing the effects of ChatGPT on our lives and work”.

If 10 years seems like too long, consider the pace of cloud adoption. The first AWS service launched in 2006, but it took another three years for enterprise spending on cloud infrastructure to exceed $1 billion.

Fast forward to 2022, and that number had reached $225 billion, with 94% of technology leaders describing their organizations as “mostly cloud” in some form. And yet, Gartner predicts spending will keep growing by 20% every year through 2026.

With AI, many will race ahead, unencumbered by legacy technology. For others, the rational but unexciting solution to the challenge of AI deployment will be to wait for it to get cheaper, more reliable, and easier to integrate. But even if you’re not able to deploy a groundbreaking system ASAP, there are still plenty of things your company can do to gain an edge in the AI arms race.

Getting ready for the AI revolution

According to the Cisco AI Readiness Index, which surveyed over 8,000 leaders around the world, there’s strong pressure to implement AI and almost unanimous understanding that it will have a huge impact on business. However, 86% of organizations are not prepared to use AI to its full potential.

The index outlines six pillars of AI deployment: strategy, infrastructure, data, governance, culture, and talent. These are the key ingredients of effective AI implementation, and the last three will take time to mature.

As AI spreads and people see more real-world examples of it generating value, company cultures will gradually warm up to it. From what I’ve learned talking to Netguru’s clients, the first step towards AI is often the deployment of ChatGPT or GitHub Copilot, so these two products are doing a lot of heavy lifting for the industry.

At the recent OpenAI DevDay, Sam Altman disclosed that OpenAI’s flagship product is used by 100 million people every week.

Over 2 million developers across 92% of Fortune 500 companies are using OpenAI’s models via API. Impressive, but still tiny considering the broad range of applications for these products. Talk to a few people and you’ll quickly realize most haven’t even used ChatGPT Plus yet, which offers GPT-4, the industry’s most capable model, with language, vision, image generation, coding, and data analysis capabilities.

Governance is perhaps the trickiest part at the moment, as the world’s governments and C-suites are struggling to determine how AI should be regulated.

Hollywood’s elites may have reached a consensus for now, but most of us outside the dream factory have to improvise while we wait for legal frameworks to be established so that we can act in compliance with them. For now, the biggest concern for companies is avoiding leaks of sensitive data and intellectual property into third-party AI systems.

Hiring in tech remains a challenge. Disclosed salaries for AI roles reach up to $900,000, and the talent is the hardest to find. That’s good news for people looking to reskill into a new, well-paid job, and bad news for companies pressured to deploy AI right here, right now.

Culture, governance, and talent will take a while to catch up to the needs of the market. In the meantime, managers can focus on getting the other three pillars of AI deployment up to the speed of the AI age – strategy, infrastructure, and data.

Strategy

As the engine that makes everything else tick, the right strategy can enable you to make leaps while others take tiny steps. This is clearly visible even in the highest echelons of Big Tech: while the agile OpenAI continues to deliver breakthroughs with Microsoft’s backing, the equally mighty Google can’t seem to keep up, half-releasing its Gemini model almost a year after GPT-4.

At Netguru, the strategy we’ve been advocating for years remains the same for the AI era. I’m talking about digital acceleration, which is best characterized by:

  • Speed-to-value – short innovation and iteration cycles that ensure agility
  • Speed of decision-making – quick moves and openness to change
  • Speed of delivery – incremental delivery over weeks or months instead of years
  • Speed of adoption – rapid time-to-market and quick customer feedback loops that accelerate product-market fit
  • Change of mindset – owning the change instead of following competitors
  • Change horizon – continuous evolution
  • Approach to change – simultaneous and continuous decision-making, discovery, and delivery
  • Model of change – surgical approach to innovation
  • Source of change – inspired by consumer and client needs

At a time when you can give ChatGPT an image of an interface and it’ll give you back the code to implement it, unlearning old ways of doing things to make room for the new becomes an important skill.

Just like writers don’t need to struggle to create a first draft anymore, and programmers don’t need to manually write boilerplate code, managers and leaders now have many new ways of accelerating their own processes thanks to our new AI-powered tools.

Infrastructure

Every major cloud provider now offers services that will enable you to run AI models, fine-tune them, or even train your own foundation models if you’re up for it.

For enterprises, the innovator’s dilemma is very much at play. They may want to go all-in on AI and build data centers full of GPU racks, but the laggards might turn out to be the winners here: there is a vast, ongoing industry effort to optimize model performance.

A few trends will collide in the near future to make AI much cheaper to run than it is now:

  • Models are getting smaller while maintaining the performance of their bigger cousins. A small, fine-tuned Code Llama model can outperform GPT-4 at coding benchmarks. Going forward, we’ll see plenty of small models overtake large foundation models in specialized use cases.
  • The AI hardware race is in full effect, with companies from Google to Microsoft developing their own chips, while industry leader Nvidia keeps developing new tech to make AI processing more efficient.
  • Many signs point to 2024 being the year of on-device AI processing: even now you can run Llama models on 2022 M2 MacBooks with decent performance, and the image generation model Stable Diffusion on a well-equipped gaming PC – or on your iPhone, as our R&D team has shown. (See the sketch after this list.)
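
To make the on-device point concrete, here’s a minimal sketch of local inference using the open-source llama-cpp-python bindings. The model path is a placeholder for whatever quantized GGUF checkpoint you’ve downloaded:

```python
# pip install llama-cpp-python
# Minimal sketch: running a quantized Llama model locally (e.g., on an M2 MacBook).
from llama_cpp import Llama

# Placeholder path: point this at a quantized GGUF checkpoint you have downloaded.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

output = llm(
    "Q: In one sentence, why run AI models on-device? A:",
    max_tokens=64,   # keep generations short for quick local tests
    stop=["Q:"],     # stop before the model starts a new question
)

print(output["choices"][0]["text"].strip())
```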

Ultimately, expanding proprietary AI compute with racks of GPUs only makes sense for companies training their own LLMs from the ground up. For everyone else, using third-party APIs and/or open-source models will be the way to go.

New services that make AI deployment easier keep being announced. Just as I’m writing this article, Cloudflare announced new additions to its Workers AI initiative, offering serverless access to Stable Diffusion and Code Llama. There are countless other providers with similar services.
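
To illustrate how thin this integration layer can be, here’s a minimal sketch of calling a hosted model through OpenAI’s Python SDK. The model name and prompt are illustrative, and other providers (including Cloudflare’s Workers AI) follow a similar request/response pattern:

```python
# pip install openai
# Minimal sketch: consuming a hosted model through a third-party API.
from openai import OpenAI

# Reads the API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; choose the model that fits your cost/quality trade-off
    messages=[
        {"role": "system", "content": "You are a concise assistant for an internal team."},
        {"role": "user", "content": "List three risks of deploying AI without a data policy."},
    ],
    temperature=0.2,  # lower temperature for more predictable output
)

print(response.choices[0].message.content)
```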

For companies with a legacy tech stack, the way to go right now is to consider adopting cloud services and an API-based headless approach, ensuring smooth data flow across the organization and enabling rapid AI experimentation.

Data

When generative AI surged in popularity, there were lots of discussions about the importance of data. Companies without a treasure chest of unique data were told they would have no moat in a world dominated by AI.

In machine learning, data is essential. Without the right data, you just won’t be able to develop a useful model. As ML practitioners say, “the quality of your data limits what you’re going to be able to do with your models”.

If you have unique data, you already have an advantage that will allow you to fine-tune foundation models, or even train your own.

Chances are that it’s stored in silos across your organization (as it is for 81% of companies in the aforementioned Cisco index), so it’s high time to consider:

  • a centralized data store,
  • adopting a headless approach and making that data available for AI experiments through APIs (see the sketch below),
  • updating data security and privacy policies to prevent leakage into third-party AI systems.
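
As a sketch of the headless idea, the layer in front of a silo can start as a small read-only service. Everything below (the endpoint, the data source) is hypothetical, using FastAPI for brevity:

```python
# pip install fastapi uvicorn
# Hypothetical read-only service exposing one data silo for AI experiments.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Headless data layer (sketch)")

# Stand-in for a real warehouse or CRM query.
CRM_NOTES = {
    "acme": ["Asked about enterprise pricing", "Renewal due in Q3"],
}

@app.get("/v1/accounts/{account_id}/notes")
def get_notes(account_id: str) -> dict:
    """Return CRM notes for one account, ready to be fed into a prompt."""
    notes = CRM_NOTES.get(account_id)
    if notes is None:
        raise HTTPException(status_code=404, detail="Unknown account")
    return {"account_id": account_id, "notes": notes}

# Run locally with: uvicorn service:app --reload
```

Once a few silos are wrapped like this, any AI experiment can pull the data it needs over plain HTTP instead of waiting for a company-wide data migration.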

But what if you don’t have a huge collection of unique data? Worry not, because you can still drive value with genAI:

  • Both proprietary and open-source foundation models are powerful enough out-of-the-box that generating value with them requires mostly finding the right use case, and designing a solution with a positive cost-benefit ratio.
  • Fine-tuning a model for a specific use case doesn’t have to require a huge amount of data – as little as 1,000 examples can be enough to tune a model towards a specialized use case (see the sketch after this list).
  • Researchers have been achieving increasingly good results with synthetic datasets. Microsoft’s recent breakthrough with Orca 2 shows that you can fine-tune a small model to do complex reasoning with a well-designed synthetic dataset. In another example, Meta achieved great results training its image editing model, Emu Edit, on a synthesized dataset. In fact, researchers have shown that training a model on synthetic images can drive better results than training on real ones.
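
As a sketch of how small that fine-tuning loop can be, here’s what launching a job looks like with OpenAI’s fine-tuning API. The file name and base model are illustrative, and train.jsonl is assumed to hold your ~1,000 chat-formatted examples:

```python
# pip install openai
# Sketch: fine-tuning a small base model on ~1,000 prompt/response pairs.
# Each line of train.jsonl is assumed to be a chat-formatted example, e.g.:
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()

# Upload the training data.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job on a small base model (illustrative choice).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll this job until it produces a fine-tuned model
```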

All in all, the lack of huge, unique datasets doesn’t need to stop you from experimenting with AI. Consider what kind of data you could acquire, collect, or synthesize to fine-tune a foundation model for a specialized use case.

How to start using generative AI if you still haven’t

Talking to Netguru’s clients, I’ve found that these are the most popular first steps towards AI adoption:

  • Acquiring ChatGPT and/or GitHub Copilot licenses for employees
  • Adding a conversational interface to an internal knowledge base (see the retrieval sketch after this list)
  • Implementing client-facing chatbots to improve customer service or website search
  • Generating anything SEO-related, from product descriptions to landing pages
  • Developing specialized tools to streamline workflows involving complex documents
  • Zero-click personalization, rapidly tailoring content depending on the user
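
The knowledge-base item is typically built as retrieval-augmented generation (RAG): embed your documents, retrieve the snippets most relevant to a question, and let the model answer from them. Here’s a minimal sketch using OpenAI’s embeddings and chat APIs; the documents and model names are illustrative, and a real system would use a vector database instead of in-memory arrays:

```python
# pip install openai numpy
# Minimal RAG sketch: answer questions from an internal knowledge base.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Expense reports are due on the 5th of each month.",
    "VPN access requests go through the #it-helpdesk channel.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(DOCS)  # in production, precompute and store these in a vector DB

def answer(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity: find the document closest to the question.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = DOCS[int(np.argmax(scores))]
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("When are expense reports due?"))
```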

Across these first steps, three themes emerge:

  • Customer-facing tools can enable your clientele to find help without having to interact with a human agent, or provide an entirely new type of experience.
  • Copilots enable workers to do more in less time, accelerating work but remaining under human supervision.
  • Internal tools can help you generate insights from large amounts of data and quickly identify patterns that humans wouldn’t be able to find.

To get these capabilities, companies of all sizes can leverage leading generative AI models – from OpenAI’s huge GPT-4 to the tiny models of French startup Mistral – either through packaged services or by building custom solutions.

When it comes to building an MVP, I recommend starting with a single API. Once you see that the solution has legs and there’s clear product-market fit, you can diversify your API suite or find an optimal open-source model to keep costs down.
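
One way to keep that later switch cheap is to hide the provider behind a thin interface from day one. A sketch of the idea (all names are illustrative):

```python
# Sketch: a thin abstraction so the first API choice doesn't become lock-in.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    """Wraps a hosted API; swap in an open-source backend later without touching callers."""

    def __init__(self) -> None:
        from openai import OpenAI  # pip install openai
        self._client = OpenAI()

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model="gpt-4",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

def product_description(model: TextModel, product: str) -> str:
    # Business logic depends only on the interface, never on the vendor SDK.
    return model.complete(f"Write a two-sentence product description for: {product}")

# Usage: print(product_description(OpenAIBackend(), "noise-cancelling headphones"))
```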

At Netguru, we’re all in on AI. Our developers have long been using GitHub Copilot to boost productivity. Marketing and sales are using ChatGPT and Midjourney to create content.

Our team built Netguru Memory, an assistant that can generate case studies and offerings and instantly access historical project information.

Even if you don’t want to deploy AI until you’re 100% confident of ROI and free from risk, you can adopt the Goldman Sachs playbook. Marco Argenti, the company’s CIO, has stated that they’re deep into experimentation but have a very high bar for deployment, which has so far prevented them from releasing any customer-facing AI apps. Nonetheless, their software developers, empowered by generative tools, are already seeing 20–40% productivity gains.

What’s next?

Nvidia’s CEO Jensen Huang predicts that AI will gain reasoning capabilities and start figuring things out by itself within two to three years. Thanks to the generative AI boom started by ChatGPT, the world went from complete disbelief that Artificial General Intelligence could ever be achieved to a three-year timeline for it.

Before that happens, I believe everything will become AI-first. The next iPhone will be a wrapper for AI, your company data will be fine-tuning for AI, and we will be volitional wrappers for AI, telling it what to do and benefitting from the output.

But, coming back down to Earth, that is still in the future. The best we can do at the moment is to prepare for it – and the Netguru team stands ready to help you do that.
