Published on: September 10, 2024
4 min read

Navigating the AI frontier: Lessons from the cutting edge

Discover key insights on AI development, from rapid prototyping to production, evaluation frameworks, and emerging industry trends.


As AI continues to evolve at a breakneck pace, developers and organizations are grappling with how to effectively integrate it into their workflows and products. At GitLab, we're always looking to stay at the forefront of these developments to better serve our community. Recently, our team attended the AI Engineer World's Fair, which provided valuable insights into the current state of AI development. Here's what we learned.

Insights on AI development

The traditional development lifecycle is gone

The advent of AI has dramatically altered the traditional software development lifecycle. An AI prototype can be built in minutes, but advancing it to production now takes significantly longer than it used to. This shift requires rethinking our development approach, including:

  • streamlining processes where possible
  • adding new steps to accommodate AI-specific requirements
  • embracing rapid iteration and rewriting over multiple cycles

Speed is key

In the fast-paced world of AI, speed of iteration is crucial. Some key takeaways:

  • If productization takes more than three months, the product may be outdated by the time it's ready.
  • Aim to try more things faster than competitors.
  • Reduce iteration time and accelerate collaboration.
  • Implement systems that de-risk getting things wrong and allow for fearless changes.

AI development requires new, practical methods

AI demands new ways of approaching development, including the following:

  1. Prioritize user experience early
  • Frontload user testing and prepare for evaluations early in the design phase.
  • Validate product needs with the best available model.
  • Consider trade-offs between achievability and value when selecting use cases.
  2. Take an iterative approach
  • Identify your base model (choose the best available).
  • Start your prompt template (involve domain experts and product managers).
  • Identify your data selection strategy.
  • Iterate on components separately and evaluate each step.
  3. Develop prompt engineering best practices
  • Standardize around a single query language.
  • Ask the large language model (LLM) to write code to solve problems rather than solving them directly.
  • Automate what you can to reduce degrees of freedom.
  • Use code for deterministic tasks outside the AI chain (see the sketch after this list).
  • Break problems into smaller, more manageable pieces.
  • Leverage technical writers and domain experts for prompt crafting.
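
To make a few of these practices concrete, here is a minimal Python sketch of keeping deterministic work in plain code and reserving one small, well-scoped prompt for the LLM. The `call_llm` helper and the CI log scenario are hypothetical placeholders, not GitLab code or any specific vendor API:

```python
from string import Template

# Hypothetical stand-in for whichever model API you use; the name and
# signature here are assumptions, not a real SDK call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your provider's SDK.")

# Deterministic work stays in plain code, outside the AI chain.
def extract_error_lines(log_text: str) -> list[str]:
    return [line for line in log_text.splitlines() if "ERROR" in line]

# A small, versioned prompt template that domain experts and technical
# writers can review and iterate on independently of the code.
SUMMARIZE_FAILURE = Template(
    "You are a CI troubleshooting assistant.\n"
    "Explain the likely cause of this failure in two sentences:\n"
    "$error_lines"
)

def summarize_failure(log_text: str) -> str:
    errors = extract_error_lines(log_text)  # deterministic step, no LLM needed
    prompt = SUMMARIZE_FAILURE.substitute(error_lines="\n".join(errors))
    return call_llm(prompt)  # a single, narrowly scoped LLM call
```

The value is the separation of concerns: the deterministic step can be unit tested, the template can be iterated on by domain experts, and the LLM call stays small enough to evaluate on its own.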

Evaluations need more time

The "Great Eval Problem" has flipped the traditional development timeline, with evaluations now taking a majority of the time. With LLMs, we have replaced the need for sophisticated model development approaches with an API call to a third party. However, we still require a significant amount of time to evaluate the responses. To address this:

  • Incorporate evaluations at every level using multiple techniques.
  • Evaluate different aspects at different stages (local, pre-production, production).
  • Focus on end-to-end evaluations that measure end-user value (a minimal sketch follows this list).
  • Consider user-centric evaluations and let features "die with UX" if necessary.
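
To illustrate what an end-to-end check can look like, here is a minimal Python sketch that scores a feature against a small curated set of cases. The `generate_answer` function and the example cases are hypothetical stand-ins for your own feature and dataset; heavier techniques such as LLM-as-judge scoring or human review would layer on top of something like this:

```python
# A minimal end-to-end evaluation sketch: run the feature against a small,
# curated set of cases and report how often the output contains what the
# user actually needs.

TEST_CASES = [
    {"input": "How do I revert a merge commit?", "must_include": ["git revert"]},
    {"input": "How do I rename a local branch?", "must_include": ["git branch -m"]},
]

# Hypothetical stand-in for the AI feature under test.
def generate_answer(question: str) -> str:
    raise NotImplementedError("Call your AI feature here.")

def run_eval() -> float:
    passed = 0
    for case in TEST_CASES:
        answer = generate_answer(case["input"]).lower()
        if all(term.lower() in answer for term in case["must_include"]):
            passed += 1
    pass_rate = passed / len(TEST_CASES)
    print(f"End-to-end pass rate: {pass_rate:.0%}")
    return pass_rate
```

Running the same kind of check locally, in CI before a release, and against sampled production traffic is one way to cover the "evaluate at every level" idea above.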

The customer PoV must be top of mind

Customer-centric considerations should be your focus. Here's how:

  • Maintain developer "flow" and reduce context switching.
  • Provide transparency into AI inputs and outputs.
  • Ease users into natural language interactions.
  • Consider human-in-the-loop approaches for complex tasks.

AI engineer roles are changing, so pay attention

As the industry matures, the role of AI engineers is becoming more defined:

  • Requires production experience and product development competency.
  • Requires engaging with various personas (ML engineers, software engineers, domain experts).
  • Demands strong data intuition and the ability to extract meaning from data.

Looking ahead

The AI landscape continues to evolve rapidly. Some trends to watch:

  • unification of prompts across models
  • advancements in evaluation and prompt generation tools
  • the rise of "slop" (unrequested and unreviewed AI-generated content)
  • movement towards inline code completion and autonomous agents
  • improvements in fine-tuning, RAG workflows, and managed agents

As we navigate this exciting and rapidly changing field, it's crucial to stay informed, adapt quickly, and always keep the end user in mind. At GitLab, we're committed to incorporating these insights into our development processes and sharing our learnings with the community.

We encourage our developers and the wider community to explore these concepts further and contribute to the ongoing dialogue around AI development best practices. Together, we can shape the future of AI-driven software development.


We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
