Putting GenAI to Work in Software Development
with Uli Hitzel
In this conversation, Jon Scheele and Uli Hitzel discuss the transformative impact of generative AI on the software development landscape. Uli shares his journey into AI, emphasizing the importance of understanding language and how to apply Generative AI tools in coding and productivity. They explore various tools available for software developers, the significance of team management in adopting these tools, and the role of AI throughout the system development life cycle, including testing and deployment. The discussion highlights the need for developers to adapt and learn how to effectively utilize AI tools while maintaining best practices in coding and documentation.
Takeaways
- Generative AI is set to revolutionize various industries, especially software development.
- Understanding language is crucial for leveraging AI effectively.
- Productivity tools like GitHub Copilot can significantly enhance coding efficiency.
- It's important for developers to maintain a balance between using AI tools and understanding the underlying code.
- Team management and tool integration are key to successful AI adoption in enterprises.
- AI can assist in various stages of the software development life cycle, including ideation and testing.
- Documentation remains a critical aspect of software development, often overlooked by developers.
- Security and compliance are essential considerations when deploying AI tools in production.
- Developers should focus on continuous learning and adaptation to new tools and technologies.
- The journey of integrating AI into workflows is ongoing and requires a collaborative approach.
Jon Scheele (00:00)
Okay, so it's 2024. Everyone's talking about generative AI and how it's going to change our jobs. For technical professionals, we hear about GitHub Copilot and other sorts of coding assistants, and there are quite a lot of them. But to make sense of how we can become more productive and use them in the enterprise, I'm very pleased to welcome Uli Hitzel. Welcome, Uli.
Uli (00:26)
Hi Jon, thank you.
Jon Scheele (00:28)
So Uli, we've known each other for some time. You've had a very hands-on career in some quite established companies: Microsoft, Axway, more recently Dyson. And now you have a very strong focus on AI and particularly how to apply it. Do you want to share a little bit about why you decided to focus specifically on using generative AI tools?
Uli (01:00)
Thanks, Jon. Hey, it's so great to be here. Well, why did I decide to focus on that part? I was very fortunate. I mean, both of us teach at NUS. And about four years ago, I started to get into the AI topic quite intensely, also because of NUS. And at some point, I realized I need to do this full time. I cannot afford to just look at some of these AI things over the weekend.
To me, it's definitely not blockchain, it's not crypto. AI is going to change everything and I need to be part of this. We've had AI for about 60 years, and GenAI is just the newest bit that we've started to get interested in. But yeah, this is just one way to look at making smart interconnected technology work. So yeah, I know that I need to be part of this and need to look at this full time.
Jon Scheele (01:58)
So there are lots of different aspects to data science generally and generative AI in particular. There are people who build large language models. There are people who build the RAG interfaces into large language models. You have decided specifically not so much to design a large language model, but to figure out how to use them best in producing code, diagrams, and other sorts of things. So what are the key things you've learned? I think you really started to look at this intensely two or three years ago, and we've seen an explosion of different tools: GitHub Copilot, Google has a tool, AWS has one. What are the things you've seen that have really caught your attention? And if I'm looking to improve my own productivity as a technology professional, where should I look first?
Uli (03:02)
Okay, well, there were quite a few nuggets in what you just said. First of all, you mentioned data scientists and language models, right? I think many people are using the word LLM multiple times a day. They obviously use ChatGPT many times. To me, one of the key parts that I keep emphasizing is that we're talking about language models. The key is language. And excuse me for emphasizing this so much. I've worked with data scientists, I know technical people, and I for one don't really understand too much detail about what's going on inside a language model. Quite honestly, I don't think I have to. But there are many very technical people, data scientists, who struggle with language models because of how they work internally: it's impossible to, I would say, reverse engineer and debug why certain inputs produce certain outputs.
For a hardcore data scientist, that's actually disturbing. So that's the first point I wanted to make. If you're good with language, if you are able to structure information (we always talk about prompting, asking the right questions), you can actually get a lot out of language models. You don't have to be a technical person to get superpowers with AI.
I think that's the first part. And the second part is to apply this in your context, whether you're a technical person, a writer, a legal person, or a doctor, and to learn how to use a variety of tools. It's never just one tool that solves everything for you. You have many specialized tools. There is a rapidly advancing number of models, tools, and platforms.
There can never be only one, so you learn how to work with them to be much more productive. I know that was a very long answer, but...
Jon Scheele (05:05)
Yeah, well, I can see there have been huge advances in the last couple of years and a proliferation of tools and large language models. But I also see a lot of people thinking: I need to learn all of these things, I need to learn exactly how the large language model works. And it seems like you can never catch up trying to do that, because the companies that are building the large language models are spending a lot of money to do it.
What I'm impressed with about your approach is: I'm not going to try to create my own large language model. I'm going to figure out what I can do in my job. So when it comes to coding, for example, software development, I think there's been a lot of excitement about the tools that can help accelerate the productivity of the coding part. And I've seen studies that suggest a 30% productivity improvement. We can get into the other things that software development professionals do in a little bit, but to start with the coding part: if I want to improve my productivity writing a Python or Java or JavaScript program, what should I look at starting with?
Uli (06:34)
OK, well, there are obviously multiple tools. You mentioned Copilot already. Yes, there is Google Code Assist. There is Claude, which works in the browser. There are enterprise-grade tools like Tabnine, for example. There's a wide variety out there. I think for me, one of the key parts is, and you just mentioned the most important part, increasing your productivity. It doesn't mean taking lazy shortcuts.
It's like somebody using ChatGPT to create an essay about something they have no clue about. It's the same here. I use these tools every day, and I use a variety of tools every day. First, I need to figure out what works for me. And second, the technical stack and the programming languages I deal with are shell scripting, Python, APIs, connectivity. I'm not good at JavaScript.
I can use some of these tools to make impressive frontends. Sometimes they work, sometimes they don't. But I can't debug them, I can't fix them. And even with the most advanced tools, it would be a constant back and forth. It gets especially tricky for enterprise companies, because the platforms and the tools are changing so quickly. So when you do vendor selection, privacy review, and figure out how to integrate this, by the time you finish that process, that tool may not exist anymore, or may not be in the same shape anymore. That's where something like Tabnine offers stability; I think they've been in business for 10 years already. But as an individual, I think it depends on where you are. If you are early in your career, if you are starting out as a software developer, it's...
I want to say it's almost like back in school, where you're not supposed to use a calculator because you need to learn basic algebra yourself. When you're more advanced, then you pick up these tools. Many of these tools, like Cursor, for example, which I use every day, are very good for senior developers. If you're just getting started, I would say look at Copilot, which many people describe as autocomplete on steroids, or look at something like Google Code Assist, and start to explore how these work. Again, the danger is that you keep producing code that you don't understand. So maybe besides the actual code production, there may be other use cases where these things are interesting.
Jon Scheele (09:12)
So there's a saying that in order to acquire knowledge, you need to have knowledge. You can learn a lot from books, but if you can't read, then those books aren't accessible to you. I think this is your point: if you're a junior developer or you don't know a language especially well, you still need to learn elements of the language in order to validate what the tool creates for you. Because if it doesn't work, you have to debug it. The tools may help you to debug it, but again, you still have to understand the debugging process. And I guess there are different levels of developer. There's the junior one who's learned the basic structure of a language, maybe done some small projects, and is entering the workforce. Then there's the intermediate level: they may have been coding with a language for two, three, or four years, and they're quite proficient at it, but don't necessarily know much about other languages or how things fit into the big picture. And then there are people who have been around for a decade or more. They may not necessarily know the newer languages in detail, but they've seen a lot of cycles of renewal of the technology landscape, they understand that there are lots of moving parts to this, and they're probably more concerned about the architecture of how these things fit together.
But to pick up on the part about producing the code and validating it: as an enterprise, as a manager, I'd be concerned that if I have a team of 10 people, they're all using a different tool. Or even if they're using the same tool, that tool isn't necessarily producing code that is in line with my organization's style guide, standards, and best practices. So what would you suggest to a team leader or manager who wants to improve the productivity of their team, but still has to comply, or at least has an interest in maintainable code, and wants to encourage a certain type of practice?
Uli (11:45)
Okay, well, I think one of the key parts is, if you have a team of developers, a style guide is only one thing, right? You have things like data governance that need to be taken care of. You have something like IP protection: what if we get sued because somebody says the code that you produced is actually part of my GitHub repository that you shouldn't have been using? There's vendor due diligence and those sorts of things. That's only one aspect. The other one is, we all know that software developers are very opinionated people. I still use vi to this very day to code. Yes, I also use Visual Studio. Other people use a whole bunch of editors: Eclipse, PyCharm, whatever it is. So I think one of the key parts, as you mentioned architecture, is to give developers a choice to use whichever tool.
And I will say, regarding the graphical user interface, the editor that they use: many of those coding assistants, whether it's Gemini, whether it's Tabnine, or a few others, actually integrate into those editors as plugins. So while the tool that you're using may look different, the language model, which can be a privately hosted one, it doesn't have to be one in the cloud, and that's one very important aspect for enterprise companies,
can be the same for everyone. And you can obviously connect this with data protection tools and all the other integrations that you need as an enterprise. As a team, you'll have to figure it out. I mean, if we only have Python developers, they can all make use of the same language model. But imagine you have somebody who produces front-end things. In the mobile app development world, everything changes constantly. And you can't always just retrain these models. Well, you mentioned that earlier.
I don't have the budget to build my own language models, but I can obviously fine-tune and train them on code. Companies may have to do this themselves. They have to start somewhere. And I think the key part is that we are not expecting a tool that does everything for everybody; you need to build up your own practice as a team to discover which tools are available, which tools fit into the frameworks, the policies, the preferences that developers have. Python and C++ are very different languages from each other, so it may not be the same model. It could still be the same platform, but you use a selection of different models based on the use case. And the other thing, and you probably didn't expect this, but I was talking about language earlier, right? I know a lot of developers who are, well, how do I say this? They are not so good at asking for help. So you really, really start from the beginning and say: I know you've all used ChatGPT.
You've all done amazing stuff with data science, but can we go back to the fundamentals of LLMs, the capabilities? Can we talk about the risks? Can we talk about the exposure of data to the outside world? Can we give you real examples of how to ask for help? Sometimes, when you have a question in your head and you start to ask the colleague sitting next to you, halfway through you say: you know what, forget about it, I already have my answer.
When you're forced to structure your thoughts, when you're forced to articulate a question, that may already be 50% of it. And then you can use AI, maybe not to produce code, but to produce test data. Or to say: there is some really old code from 20 years ago, and I have no idea how to understand it. Or: there is some code that I wrote five years ago. Can you make a markdown document? Can you create a piece of API Swagger documentation for me?
Or: I'm supposed to pick up this new language. I'm so embarrassed, I wouldn't be able to tell a human, but I can tell a machine: I don't know what this code does. And you have a chat with this thing, and ultimately you come out much better. But it's far away from this whole idea that you just put some requirements into a machine and it produces code. That's why I think copilot is actually a beautiful term. Whoever came up with that brand, it's very good.
Jon Scheele (16:06)
So I think what you described is probably the second step. I mentioned that in order to acquire knowledge you need to have knowledge, you need to know how to read. But you made the point that you need to know how to ask questions, you need to think about and ask for help, and frame what your thinking is in order to get a result. So that's...
Uli (16:13)
Yeah.
Jon Scheele (16:31)
That's probably the second step there. What we've talked about so far is the coding part, but many people will say, well, actually software developers, even people with software development in their title, only spend about 10 to 20% of their time actually coding, and the rest of the time they're doing other things. Even before they start coding, there's the architecture and the design, and
Uli (16:33)
Okay.
Jon Scheele (16:58)
After they've done some coding, there's the debugging part, and then the testing, which has multiple levels: there's unit testing, system testing, user acceptance testing before it goes into production. So what do enterprises need to think about when they look at the tools across those different stages of the software development life cycle?
Uli (17:25)
Okay, well, the whole ideation part: building proofs of concept, building architectural blueprints, helping architects to make decisions. I think this has become so much part of the general knowledge of the language models that you may not even need those specialized tools for it anymore. Probably even with ChatGPT, the free version, and definitely with Claude.ai in the browser.
I use these things on my phone, actually. Even when you walk to the bus, you can have a conversation using voice and say: you know, I'm looking to build a distributed system and I have this headache. How do these things communicate with each other? How do I achieve the scale? And at some point you get ideas to use pub/sub, to use asynchronous messaging frameworks. So there's this whole ideation part that you would typically do in your head, or you have meetings the next day with people,
putting this on the whiteboard. You still have to do that, obviously, but I mean the sort of prep work that you can do just by yourself, just to throw ideas around and to validate certain aspects. And if you don't stop those models, they will even start to produce code, which you don't want. It comes back to this whole...
Jon Scheele (18:43)
Yeah. So I found, for example, that when I'm writing something, an article or something else, using ChatGPT or another tool can help me get over the blank page problem when I'm not sure where to start. So I'll ask some questions and it won't produce a final product, but it'll give me a sort of rough draft
Uli (18:56)
Hmm. Yeah.
Jon Scheele (19:10)
with an outline, and then I realize, okay, it's missing that bit and I can improve it. I found you can also ask these models: list the major components in a data pipeline. It'll produce some sort of answer, and then you can start to become more specific: okay, it's going to be real-time, event-driven. What are the alternatives to
Apache Kafka? And it can start to give you some greater detail, and then you use your own expert knowledge to drill down a little further than that. The challenge I've seen so far is that it's not great for diagramming, but you showed me some things earlier that show you can even produce a sequence diagram with the right sort of plugin.
Uli (19:59)
Yeah, flowcharts, yes. Well, I was going to say, yes, it's actually fantastic to do these exercises with, well, not with the language models themselves, but, and I'm always very specific here, with these things built around language models, like ChatGPT, Claude. People don't actually interact with the language model itself; it's a complicated chatbot. But anyway, the point I wanted to make is:
Yes, it delivers you this raw material, and there is a lot of, to use a technical term, garbage in there sometimes, things that absolutely don't make sense. So you have to know. Again, absolutely: it gives me 80%, and there are some things that I have to fix and that are missing. But I think that's one of the key bits. Now... sorry, you'll just have to remind me, at the end, what was your question?
Jon Scheele (20:56)
So we're talking about the whole system development life cycle, and we started with architecture, but then...
Uli (21:03)
Ah yes. Yes, I have it. Ultimately, even music these days, Spotify, MP3s, is digital. Diagrams that we draw in Draw.io, in diagram.net, even SVG files, are structured bits. And if you can teach a language model how one is constructed, you can also teach it to say: give me a...
a sequence diagram, give me a flow diagram, using this format. And you will have something that renders nicely in the browser so humans can see it, but it's also structured almost like code. So you can also create a version of this and put it in Git. You can basically use language models to produce diagrams. I've not seen it much yet for architecture diagrams, but you know what, I'm going to try this out. I'm sure somebody has done this already.
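The diagrams-as-code idea Uli describes can be sketched in a few lines of Python. This is a hypothetical example (the scenario and file name are invented for illustration), but the Mermaid sequence diagram syntax it emits is real and renders in browsers and in Git hosting UIs:

```python
# A minimal sketch of "diagrams as code": the diagram is plain text,
# so it can be reviewed, diffed, and committed to Git like source code.
# The participants and flow below are illustrative, not from the conversation.

mermaid_diagram = """sequenceDiagram
    participant Dev as Developer
    participant CA as Coding Assistant
    participant LLM as Language Model
    Dev->>CA: Ask for a unit test
    CA->>LLM: Prompt with code context
    LLM-->>CA: Generated test case
    CA-->>Dev: Suggestion in the editor
"""

# Because the diagram is just structured text, versioning it is trivial:
# write it to a file and commit it alongside the code it documents.
with open("assistant-flow.mmd", "w") as f:
    f.write(mermaid_diagram)
```

A language model that has seen the format can emit this text directly, which is exactly why "give me a sequence diagram in this format" works as a prompt.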
Jon Scheele (21:50)
Hmm, okay.
Okay.
Yeah. Okay. So we go from architecture, and we already talked about the coding part. So then we get into testing. Now, where can the tools help us with the testing part?
Uli (22:04)
Mm-hmm.
Some of the tools that we mentioned like CodeAssist, like Cursor, like Tabnine, they can help you write test cases in certain formats. That's one thing.
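A hedged illustration of the kind of test scaffolding such assistants draft. The function and test names here are invented for the example, written as plain Python rather than any specific tool's actual output:

```python
# A small function and the assistant-style tests one might generate for it:
# happy path, boundary values, and an error case.

def parse_port(value: str) -> int:
    """Parse a TCP port number from a string, validating the range."""
    port = int(value)
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_happy_path():
    assert parse_port("8080") == 8080

def test_parse_port_boundaries():
    assert parse_port("1") == 1
    assert parse_port("65535") == 65535

def test_parse_port_rejects_out_of_range():
    try:
        parse_port("70000")
    except ValueError:
        pass  # expected: 70000 is above the valid range
    else:
        raise AssertionError("expected ValueError")
```

Generating the repetitive cases is where the assistant saves time; deciding whether the cases cover what matters is still the developer's job.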
Jon Scheele (22:26)
which is, I wouldn't say always, but some people really love writing test cases. I can't say I'm enthusiastic. Writing one or two is okay, but writing an entire set that covers lots of different possibilities gets repetitive, and we don't
Uli (22:32)
Hmm.
It's the same like documentation.
Jon Scheele (22:51)
like doing repetitive things as humans. So if we can get a machine to do it, that's great. But to your point, we discussed how you need to validate. How do you validate that the test cases the tool has produced are going to be fit for purpose?
Uli (23:13)
Well, like I said earlier, you'll still have to understand them. Sorry, I just wanted to say: the number one thing that developers need is documentation, and the number one thing that developers don't enjoy doing is writing documentation. So it's similar here. When we write test cases, they can be for a manual tester. And I've worked with people who do manual QA.
It's unfortunately very repetitive. You may have seen that the Claude folks released, just a few days ago, this thing where the language model can look at the screen and click buttons, which raises all kinds of issues. Is it going to be able to solve captchas? Is it able to do bank account transactions on my behalf? Do we really want this? But if we go step by step, there is still a lot of need for manual QA testing.
These things are unfortunately repetitive, so it needs a specific character. I've worked with and hired these sorts of people, and they are able to go through these things step by step. And with a language model, with a proper tool that understands your code, it can produce certain test cases for manual tests. They are repetitive, they look a bit boring, but sometimes that's something you have to do. But obviously,
there are ways to say which parts of these things can be automated. Especially if they are front-end related, there may be some Selenium pieces that you can use. There may be some legal requirements why certain things cannot be tested automatically. But especially, let's say, when we're testing APIs: at API Days a few years ago, I introduced a Python framework that you can use to do semantic testing on APIs. It's not just: is the endpoint available? But: I'm expecting a JSON payload. If I put this in, I'm expecting an error. So you can actually script this. And if you teach this to a language model, and if you build an agent around it, you can also use it to run automated tests, happy path, for example. There are many opportunities. Yeah.
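A minimal sketch of what semantic API testing can look like, not just "is the endpoint up" but "does the payload mean what we expect". The endpoint, fields, and checks here are hypothetical, and this is illustrative Python, not the framework Uli mentions; a real test would issue an HTTP call first and then check the decoded JSON:

```python
# Semantic checks on a hypothetical /users response payload:
# verify the fields exist, have the right types, and pass a sanity check.

def check_user_payload(payload: dict) -> list:
    """Return a list of semantic problems found in a /users response."""
    problems = []
    for field, expected_type in [("id", int), ("email", str), ("active", bool)]:
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}")
    # A sanity check beyond type: does the email value look plausible?
    if isinstance(payload.get("email"), str) and "@" not in payload["email"]:
        problems.append("email does not look like an address")
    return problems

# Happy path: a well-formed payload produces no findings.
assert check_user_payload({"id": 1, "email": "a@b.co", "active": True}) == []
# Sad path: a malformed payload is flagged.
assert "missing field: active" in check_user_payload({"id": 1, "email": "a@b.co"})
```

Checks like these are easy for a language model or agent to generate in bulk once it has seen the API's schema, which is the scripting opportunity described above.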
Jon Scheele (25:27)
Sad path, yeah. So aligned with the testing part there is also security vulnerability scanning. I guess there have been automated tools for doing security vulnerability scanning for some time, but perhaps this space is also changing, because
that security space is constantly changing; there are new threats that come up all the time. When it comes to deploying to production, what's your level of faith in having the AI deploy to production? Or is that something where you always want someone with a finger on the button before it goes?
Uli (26:10)
You're talking about CI/CD done by AI? Well, I mean, it depends. And like I said, AI is not just language models, and language models aren't all of AI. We have a lot of smart tools. We don't have to talk about Jenkins and so on, but just GitHub Actions is a wonderful, mostly reliable tool to do automatic deployment. And of course, you can have certain steps inside those pipelines taken care of by
Jon Scheele (26:13)
Yes, yeah.
Uli (26:39)
AI, that's not a problem. I think generally, I've seen companies saying: we know nothing about AI, can we deploy something in production three weeks from now? It reminds me of what we had in the early days of cloud computing. When you look at cloud adoption, you don't put the production database of your core system into the cloud within a few months. You start...
Jon Scheele (27:04)
Mm-hmm.
Uli (27:06)
by training your AI muscles and building up that knowledge, and see how much you can rely on this thing. And ultimately, it's similar to building customer-facing chatbots with no AI experience versus making your existing customer service agents, who are human, more successful with AI so they can hang up the phone in five minutes instead of 30 minutes. And I would always look at these things like...
Jon Scheele (27:08)
Yeah.
So start with something non-critical, or even if it is critical, something that's not customer-facing, something where you can catch problems.
Uli (27:42)
Well, I would say it's this whole autonomous part that people always get stuck on. I mean, we have incredible demos. They're very impressive, but they are not as reliable as, well, everybody talks about the three nines, the five nines of availability. We now have these alien intelligence bits that we use, and I can't debug the code. So sometimes I engage with these things and I say: hey, I gave you this input and you produced this output; it doesn't make any sense. Why did you do that?
And I can't look at the code and say: yes, it's because of this switch and such. It'll say: I'm sorry, you're right, I shouldn't have done this, I won't do it again. And I can make it very restrictive, but then it's not as flexible as I want. So it takes a lot of experimentation. And it's really about setting the right expectations with senior people, with business people: it's not switch on, download, and you have 90% cost savings within weeks. It's a journey, and you need to build up your own team of cross-functional players who figure out how to deal with this as a company, as a team, as an organization. But most of all, you don't wait for something to be ready, because things are never going to be ready. You have to start now.
Jon Scheele (29:06)
Yeah. Well, thanks for sharing that, Uli. I appreciate you sharing your own journey and then the different elements across the system development life cycle where tools can help us, but where we also need to watch and be careful. And I like your analogy about building the muscles where you need to get into training and realize that it's not a sprint, it's a marathon.
So thanks very much, really, I appreciate it.
See more about AI

Getting Started with AI Coding Assistants
by Uli Hitzel
AI coding assistants can massively change the way developers work, offering a range of benefits from code generation to legacy codebase understanding. But with so many tools emerging and the landscape evolving at an incredible pace, how do you choose the right one for your needs? What are the key considerations for non-coders, junior developers, senior developers, and enterprises looking to use the power of AI-assisted coding? Let's explore this exciting and rapidly changing landscape.
www.linkedin.com/pulse/getting-started-ai-coding-assistants-uli-hitzel-gwa2c/

The Interconnection of AI and APIs with Aki Ranin
Conversation with Aki Ranin about his journey in the AI data science space. Aki explains how AI and APIs are intricately linked. He highlights the potential of large language models and AI agents in transforming industries and making AI-assisted tasks more efficient. He also discusses the challenges of discoverability and the importance of metadata in making information accessible to AI agents. Aki provides recommendations for individuals looking to understand the trajectory of AI and APIs.
https://www.apiconnections.io/podcast/ep-01-the-interconnection-of-ai-and-apis-with-aki-ranin

Debugging Kubernetes using AI with Nilesh Gule
Conversation with Nilesh Gule on the critical role of automation in DevOps, particularly through the use of Kubernetes. We explore the benefits and challenges of Kubernetes, emphasizing the complexities involved in debugging and managing Kubernetes clusters. Nilesh introduces K8sGPT, a generative AI tool designed to assist in diagnosing and resolving issues within Kubernetes environments.
https://www.apiconnections.io/podcast/s2e2-debugging-kubernetes-using-ai-with-nilesh-gule
powered by blue connector
API Strategy and Tech Advisory, Training and Events
We connect your organisation, your customers, partners and suppliers with the information and knowledge you need to make your tech work for you