How To Improve the Developer vs. AI Relationship

Developers are often skeptical of artificial intelligence. Count me as one of them.

But the potential of AI tools -- in the right context, of course -- should outweigh our skepticism, regardless of how justified it may be. Like it or not, the path to faster releases and better products runs through AI. It's on the development community to adapt.

There's no denying the trust gap between coders and the wealth of AI tools on the market. According to Stack Overflow's 2024 Developer Survey, only 43% of devs trust the accuracy of AI tools, and nearly half (45%) say that AI tools struggle to handle complex tasks.

Some AI tools simply aren't effective, and other platforms are injecting AI into areas where it isn't particularly helpful. There's a difference between doing a circus act with AI and actually using it to improve your workflow.

As a founder and CTO, I've had experience with both sides of the coin. I use AI tools such as ChatGPT to help me code, and my team has also worked hard to build an AI assistant into our own product. Through these experiences, I've made a few observations about optimizing the impact of AI in ways developers can still trust:

At this point, we can't trust AI on its own to write code from end to end. But that doesn't mean we can't use it to boost our efficiency. It's all about establishing expectations.

I've written a lot about AI's potential to remove some of the manual burden from developers' workloads. Traditionally cumbersome tasks such as generating diagrams can easily be accelerated with the help of AI: the AI helps create a starting point for the chart, and the dev comes over the top to add their expertise and customization.
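For example, here's a minimal sketch of that workflow using the OpenAI Python client -- the model name, prompt, and diagram contents are illustrative assumptions, not a prescribed setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask the model for a first-draft Mermaid flowchart. The prompt and
# model name are illustrative; any capable chat model would do.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Write a Mermaid flowchart for a user signup flow: "
            "form submission -> validation -> account creation -> "
            "welcome email. Return only the Mermaid code."
        ),
    }],
)

# Starting point only: the developer still edits the Mermaid source
# to add expertise and customization.
print(response.choices[0].message.content)
```

The output is a draft, not a deliverable; the human pass at the end is the point.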

I personally use ChatGPT to write code. Sometimes I'll ask the AI to help me generate an API, then build a test suite and tests for it. It greatly reduces my usual timeline for testing code and identifying errors.

AI can help generate automated tests based on a template. Afterward, the developer should carefully review the test scripts. From there, you can generate the actual code and run the tests against it until they pass.
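As a toy illustration of that loop (every name, route, and assertion here is hypothetical), the reviewed tests act as the contract, and the endpoint is the code you iterate on until they pass:

```python
# Run with pytest. The tests are the kind an AI can draft from a
# template and a human then reviews; the endpoint is the code written
# to make them pass.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class User(BaseModel):
    name: str
    email: str

@app.post("/users", status_code=201)
def create_user(user: User) -> User:
    return user

client = TestClient(app)

def test_create_user_returns_201():
    response = client.post(
        "/users", json={"name": "Ada", "email": "ada@example.com"}
    )
    assert response.status_code == 201
    assert response.json()["email"] == "ada@example.com"

def test_create_user_rejects_missing_email():
    # Pydantic validation should reject the incomplete payload with 422.
    response = client.post("/users", json={"name": "Ada"})
    assert response.status_code == 422
```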

This is typically most effective for writing standard, almost boilerplate code. If you give the AI clear boundaries, it works well -- and saves plenty of time.

The dev world played a huge role in inspiring our team to integrate AI into Mermaid Chart. I remember coming across a video of a developer using ChatGPT to build his own Mermaid diagrams. The thought hit me: What if we integrated this into our product? Another win for the open-source community!

Long story short, developers should lean on each other when it comes to recommendations of tools and ways to use AI efficiently beyond the hype.

Hallucinations are likely a major source of distrust in AI. Some generative AI models produce hallucinations -- aka misleading or incorrect responses -- up to 27% of the time. This might not seem like much, but getting burned even once by a hallucination can put your guard up.

This makes it extremely important to vet any response from an AI system. You can generate code that looks perfect but isn't. I've caught myself saying, "That API is perfect -- how could I have missed it?", only to realize that the API, in fact, does not exist.
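One cheap vetting habit -- a sketch, not a complete defense -- is to confirm that an AI-suggested symbol actually exists in the installed library before building on it:

```python
import importlib

def symbol_exists(module_name: str, attr: str) -> bool:
    """Return True only if the module imports and exposes the attribute."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

print(symbol_exists("json", "loads"))      # True: a real function
print(symbol_exists("json", "load_fast"))  # False: plausible-sounding, but invented
```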

Chatbot hallucinations are probably here to stay for a while. But the more AI vendors can limit these misleading responses, the more developers can trust the outputs. And the more safeguards we put around how we use AI -- such as making sure it recognizes proper test suites -- the more we can contain the problem.

Building trust is a two-way street. Humans also need to take accountability for using AI responses in their work.

Even if quality doubles and hallucinations are cut in half, humans should always be captaining the ship. Cutting corners and taking AI responses as gospel will tarnish the actually useful and effective applications of AI in software development. We need to pay attention to how we're using AI and make sure that the creative, strategic elements of our human thinking are in control.

In many ways, the conversation around AI trust should be more about inward reflection than an outward examination of the quality of AI tools.

We've really only seen the tip of the iceberg when it comes to AI's evolution. The trust conundrum isn't going away -- and AI models are only going to improve in quality.

Will we get to a point where AI can replace every knowledge worker? Or have we reached a plateau? Will we be able to launch autonomous AI agents to do work for us? At what point does this all become harmful?

That's why it'll be important for governing bodies to establish common standards around AI usage. It would be nice to see alignment across nations -- perhaps the EU and the US working together to establish shared frameworks.

There's a lot of sizzle around AI. But there's also a lot of substance. It's important to understand the difference between the two. And building comfort and trust with these systems will require some inward reflection, creativity, and initiative.
