There is a proliferation of "AI" chatbots, coding assistants, and, more recently, autonomous agents. There are certainly productivity benefits to be found, but also many risks that we don't understand well. The world of IT and software changes quickly, and with LLMs able to tirelessly churn out millions of lines of code, that pace is increasing enormously. Such rapid change inevitably brings unintended consequences, so what can we do to better prepare ourselves? The first step is awareness.
*Image credit: Futurama, 20th Century Fox*
I was prompted to write this post after reading Scott Shambaugh's "An AI Agent Published a Hit Piece on Me" (reported by The Register). A maintainer of Matplotlib, a popular Python package, rejected a code contribution originating from a piece of AI software called OpenClaw. Someone had set that bot up to roam around GitHub and attempt to fix issues it finds in scientific software. Such automated approaches are now common, and the resulting influx of verbose pull requests is placing strain and extra workload on large open source projects. What was new this time is that the bot proceeded to write a blog post complaining about how unfair the rejection was! Scott explains how other bots might read this post, with potentially harmful consequences, such as an HR department using an AI agent to judge people's behaviour based on their public record. (This rabbit hole also led me to Moltbook, a social network for AI agents!)
There are many concerns with AI tools - the environmental cost, cybersecurity problems, IP theft, the impact of cognitive offloading, inequitable access and control, a swamp of other ethical issues, and the annoyance of companies trying to force AI indiscriminately into everything (Copilot?). Yet their continued presence is inescapable. We should think carefully about where and how to apply these tools, and, unfortunately, how to protect ourselves against them.
- Ashley Smith
IAGA ICEO Co-Chair
