A reminder that, while this is pretty neat and probably offers a lot of convenient, already-built tooling for GCloud resources, an "agent" is simply an LLM call in a loop, each call presenting some number of available tools. If you're building your first agent, I'd recommend coding directly against an LLM API (probably the OpenAI Responses API, which is sort of a lingua franca of LLMs now).
This is one of those cases where it's really helpful to code, at least once, at one layer of abstraction below the one that seems most natural to you.
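For concreteness, here is a minimal sketch of that loop in Go against the Responses API over plain HTTP. The JSON field names, tool schema, and model id are my reading of the public docs rather than anything ADK-specific, so treat those details as assumptions; the point is just the shape: call the model, run whatever tools it asks for, feed the results back, repeat.

```go
// A minimal "LLM call in a loop, each call presenting tools" agent, talking to
// the Responses API over plain HTTP. Field names and the model id are my
// reading of the public docs, not anything ADK-specific.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type outputItem struct {
	Type      string `json:"type"`
	Name      string `json:"name,omitempty"`
	Arguments string `json:"arguments,omitempty"`
	CallID    string `json:"call_id,omitempty"`
	Content   []struct {
		Type string `json:"type"`
		Text string `json:"text"`
	} `json:"content,omitempty"`
}

type modelResponse struct {
	ID     string       `json:"id"`
	Output []outputItem `json:"output"`
}

// callModel POSTs one request to the Responses API and decodes the reply.
func callModel(body map[string]any) (*modelResponse, error) {
	buf, _ := json.Marshal(body)
	req, _ := http.NewRequest("POST", "https://api.openai.com/v1/responses", bytes.NewReader(buf))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	out := &modelResponse{}
	return out, json.NewDecoder(resp.Body).Decode(out)
}

// The one tool this toy agent exposes; the model only ever sees its schema.
func getTime(arguments string) string { return `{"time":"2025-01-01T00:00:00Z"}` }

func main() {
	tools := []map[string]any{{
		"type":        "function",
		"name":        "get_time",
		"description": "Returns the current time as JSON.",
		"parameters":  map[string]any{"type": "object", "properties": map[string]any{}},
	}}

	// First turn: the user's request plus the tool catalog.
	body := map[string]any{"model": "gpt-4.1-mini", "input": "What time is it?", "tools": tools}

	for { // this loop *is* the agent
		resp, err := callModel(body)
		if err != nil {
			panic(err)
		}
		var toolOutputs []map[string]any
		for _, item := range resp.Output {
			switch item.Type {
			case "function_call": // the model asked us to run a tool
				toolOutputs = append(toolOutputs, map[string]any{
					"type":    "function_call_output",
					"call_id": item.CallID,
					"output":  getTime(item.Arguments),
				})
			case "message": // the model produced text for the user
				for _, part := range item.Content {
					fmt.Println(part.Text)
				}
			}
		}
		if len(toolOutputs) == 0 {
			return // no more tool calls requested; we're done
		}
		// Next turn: hand the tool results back, chained to the previous response.
		body = map[string]any{
			"model":                "gpt-4.1-mini",
			"input":                toolOutputs,
			"tools":                tools,
			"previous_response_id": resp.ID,
		}
	}
}
```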
Agree. I used the Responses endpoint first, and beyond questions like how to handle context, it made me realize I did not want to build or self-host a lot of what AI agents really need: context, security, controls, external data source connection management, interaction mapping, etc.
Reminds me of another recent post: "You should write an agent" https://news.ycombinator.com/item?id=45840088
That's because OP wrote that
Having tried a few of these agent frameworks now, ADK-Python has easily been my favorite.
- It’s conceptually simple. An agent is just an object, you assign it tools that are just functions, and agents can call other agents (see the sketch after this list).
- It’s "batteries included". You get a built-in code execution environment for doing math, session management, and a web-server mode for debugging with a front-end.
- Optional callbacks provide clean hooks into the magic (for example, anonymizing or de-anonymizing data before and after LLM calls).
- It integrates with any model, supports MCP servers, and it's easy enough to hack in your existing session management system.
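The sketch promised above: this is not the ADK API (every name here is invented for illustration); it only shows the "agent is an object, tools are plain functions, an agent can be another agent's tool" shape in plain Go.

```go
// NOT the ADK API: every name here is invented. It only illustrates the shape
// from the list above: an agent is an object, its tools are plain functions,
// and one agent can be handed to another agent as just one more tool.
package main

import "fmt"

// Tool is any function that takes some input text and returns some output text.
type Tool func(input string) string

// Agent bundles a name with the tools it is allowed to call.
type Agent struct {
	Name  string
	Tools map[string]Tool
}

// Call stands in for "send the prompt and tool results through a model".
func (a *Agent) Call(prompt string) string {
	out := prompt
	for name, tool := range a.Tools {
		out = fmt.Sprintf("%s\n[%s/%s] %s", out, a.Name, name, tool(prompt))
	}
	return out
}

// AsTool lets this agent be registered as a tool of another agent.
func (a *Agent) AsTool() Tool { return a.Call }

func main() {
	searcher := &Agent{Name: "searcher", Tools: map[string]Tool{
		"web_search": func(q string) string { return "results for: " + q },
	}}
	root := &Agent{Name: "root", Tools: map[string]Tool{
		"search_agent": searcher.AsTool(), // an agent calling another agent
	}}
	fmt.Println(root.Call("find the adk-go repo"))
}
```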
I'm working on a course in agent development and it's the framework I plan to teach with.
I would absolutely take this for a spin if I didn't hate Go so much :)
I have not test-driven adk-go. But if you, like me, have not toyed around with agents until now, there is a nice, readable example in [1] that explains itself.
[1] https://github.com/google/adk-go/tree/main/examples/web
I was surprised a native TypeScript agent kit wasn't a core initial offering.
Been looking forward to this. I'm not up to date on my Python, and reviewing Claude's implementation of the Python library has taught me a lot.
Gonna point Claude at our repo and see if I can do an easy conversion; it makes the amount of reviewing I have to do a bit more bearable.
Why write agents in Go?
Python is way more ergonomic than Go when dealing with text. Go's performance advantages are basically irrelevant in an AI agent, since execution time is dominated by inference time.
Go is pretty fantastic to write agents in; it has a very good and expansive standard library and a huge mess of third-party libraries. A lot of very basic things agents tend to want to do (make HTTP requests, manage SQLite databases) are very idiomatic in Go already. It's easy to express concurrency in Go, which is nice if you're running multiple context windows and don't want to serialize your whole agent on slow model calls. It's very fast and it compiles to binaries, which, depending on how you're deploying your agent, might be a big win or might not be.
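A sketch of what that buys you in practice: fanning several slow model calls out across goroutines so one stalled context window doesn't serialize the whole agent. callModel here is a hypothetical stand-in for whatever client you actually use.

```go
// Fan several slow model calls out across goroutines so one stalled context
// window doesn't serialize the whole agent. callModel is a hypothetical
// stand-in for whatever client you actually use.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func callModel(ctx context.Context, prompt string) (string, error) {
	select { // pretend this is a slow round trip to a model
	case <-time.After(2 * time.Second):
		return "answer to: " + prompt, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	prompts := []string{"summarize repo A", "summarize repo B", "summarize repo C"}
	results := make([]string, len(prompts))

	var wg sync.WaitGroup
	for i, p := range prompts {
		wg.Add(1)
		go func(i int, p string) { // one goroutine per context window
			defer wg.Done()
			out, err := callModel(ctx, p)
			if err != nil {
				out = "error: " + err.Error()
			}
			results[i] = out
		}(i, p)
	}
	wg.Wait() // total wall time is ~one call, not three

	for _, r := range results {
		fmt.Println(r)
	}
}
```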
Yes, and I'll add that goroutines can model task queues in Go easily; you can then schedule and cancel those tasks reliably using context cancellation and channels, all while they execute concurrently (or in parallel).
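Something like this, sketched with nothing but the standard library; the task type and timings are invented, it just shows a channel as the queue, goroutines as workers, and context cancellation as the shutdown path.

```go
// A cancellable task queue with nothing but the standard library: a channel is
// the queue, workers are goroutines, and context cancellation is the shutdown
// path. The task type and timings are invented for illustration.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type task struct{ id int }

func worker(ctx context.Context, queue <-chan task, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case <-ctx.Done(): // cancelled: abandon whatever is still queued
			return
		case t, ok := <-queue:
			if !ok {
				return // queue drained and closed
			}
			time.Sleep(100 * time.Millisecond) // pretend to do the work
			fmt.Println("finished task", t.id)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()

	queue := make(chan task, 16) // the task queue
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three workers running concurrently
		wg.Add(1)
		go worker(ctx, queue, &wg)
	}

	for i := 0; i < 10; i++ { // schedule ten tasks
		queue <- task{id: i}
	}
	close(queue)
	wg.Wait() // returns once the queue drains or the context is cancelled
}
```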
Go hits the sweet spot of expressive concurrency, a compile-time type system, and a strong standard library with excellent tooling, as you mentioned.
My hope is that, similar to Ruby in web development, Python's mind share in LLM coding will be siphoned to Go.
Concurrency. Unless you're happy stopping the world on LLM I/O… Go excels at handling network calls and the like, which is basically what agents are.
100%, I don't really get the justification for Go today. But looking forward, we can imagine a world of agents, agents everywhere, including embedded in systems that are built in Go. So I guess it would be more suitable for that.
There are Python bindings for the framework as well.
Personally, I could see Go being quite nice to use if you want to deploy something as, e.g., a compiled serverless function.
I'm assuming the framework behaves the same way regardless of language, so you could test with Python first if you want and then move over to, e.g., Go if needed.
Why not Go? AI agents are not just scripts; they are the same as any other application that needs to scale. Java or Go: if the application can perform better, it is always good to have the option.
Is there anything substantively better here vs. the many other agent frameworks, or is this just the Gemini-specific answer to them?
This is a Go variant of the already released Agent Development Kit in Java and Python.
And… none of them are Gemini-specific. You can use them with any model you like, including Gemini.
I'm not an expert, but compared to LangGraph it's more opinionated and less flexible, though easier to get started with for basic agent apps.
Worth a spin.
Thanks for posting. I am in the midst of evaluating some combination of n8n, OpenAI Swarm, and others. This is a great addition.
I'm also interested in n8n. From what I gathered, it's an everything-baked-in app, not a library, meaning that unless you're doing upstream contributions you don't actually code anything; you just manage big configs. How are you planning to use this toolkit with it?