The First Time I Didn’t Need Photoshop
One of the best aspects of writing these pieces over the last two years has been the images that accompany them. As someone without a speck of artistic talent, generative AI has let me turn wild thoughts into visuals. I prompt it to blend styles like bandhani patterns with polka dots, or mix Victorian architecture with cyberpunk vibes. The results surprise me every time. They spark a quiet thrill, seeing my imagination come to life on screen.
For this article, we tried something new. Here is the starting prompt we gave Google's new, really cool Nano Banana for the image above (which, unlike our usual practice, has little to do with the article): "A hummingbird dives on a sushi, causing a splash of rice, set against a surreal canvas of Holi colors and a starry night sky, in the style that contains traces of Picasso and Da Vinci." The hotch-potch produced something showy, but that wasn't the point.
In all our images for these articles, one small, stubborn task always remained: a final touch that sent us scurrying back to the familiar world of Photoshop to embed our Geninnov logo. It was the one step that broke the seamless flow from idea to image, a persistent reminder of the technology's limits.
Until today.
For the first time, for this very article, we embedded our logo directly within the generative process. No Photoshop. No manual fix. It just… worked. The logo was in, not pasted on; it was part of the world. That moment felt like a shift, not because the image was perfect, but because the tool finally let us own the entire process.
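For readers curious what "embedded directly within the generative process" looks like in practice, here is a minimal sketch of the workflow shape. The ImageModel protocol, its generate method, and the file names are hypothetical placeholders rather than any vendor's actual API; the only point is that the logo is sent to the model as a reference input alongside the prompt, instead of being composited afterwards in an editor.

```python
from pathlib import Path
from typing import Protocol


class ImageModel(Protocol):
    """Stand-in for whichever multimodal image API you use (hypothetical)."""

    def generate(self, prompt: str, reference_images: list[bytes]) -> bytes: ...


def generate_branded_image(model: ImageModel, prompt: str,
                           logo_path: str, out_path: str) -> None:
    """Generate an image with the logo composed into the scene itself."""
    # The logo travels with the prompt as a reference input, so the model can
    # render it into the scene (lighting, perspective, texture) rather than
    # having it pasted on top in an editor afterwards.
    logo = Path(logo_path).read_bytes()
    image = model.generate(prompt=prompt, reference_images=[logo])
    Path(out_path).write_bytes(image)


# Example call, with any concrete client that satisfies the ImageModel protocol:
# generate_branded_image(
#     model=my_client,
#     prompt="A hummingbird diving on sushi amid Holi colors and a starry sky; "
#            "embed the attached Geninnov logo naturally into the scene.",
#     logo_path="geninnov_logo.png",
#     out_path="article_hero.png",
# )
```

The design point is small but decisive: the branding becomes part of the prompt-to-image step itself, so there is no separate editing pass left to break the flow.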
This isn’t just about skipping a software step. It’s about control. About not having to hand over your vision to someone else’s interface or rules. And that’s exactly where we’re headed — not just with images, but with everything AI touches.
The Great Blurring
Now, this isn't another paean to the magic of generative AI. Nor is it a clickbait-fueled prediction about the "end of Photoshop." When Google rolled out its new image-editing model, Nano Banana, last week, the predictable chorus of "Is this an Adobe killer?" began. Our answer is simple: the question is wrong.
What we are witnessing is not a one-on-one cage match, but a great blurring of a hundred different battle lines. The neat categories that defined the tech world are being erased. Google is now playing in what was once the exclusive domain of Adobe. At the same time, Adobe has rolled out PDF Spaces, a feature that takes direct aim at Google's own NotebookLM. It's a dizzying dance of competition and convergence.
The competitive landscape is no longer a grid; it's a kaleidoscope. Boundaries between search, shopping, design, and marketing are dissolving. The generative ability of these new models creates entirely new workflows, making old distinctions irrelevant.
The most telling move of the past few days came from Microsoft. Despite its deep, intricate, and occasionally fraught partnership with OpenAI, Microsoft announced its own proprietary LLM, MAI-1, to power parts of Copilot. Why? Why would a company with front-row seats to the most advanced AI on the planet feel the need to start from scratch?
The answer isn't just about the "instant copyability" of new features, though that's part of it. Any advantage Google gains from Nano Banana today will likely be replicated by competitors tomorrow. The real answer is more fundamental. It's not just about features; it's about control, about owning the core intelligence that powers your products. And this isn't a tech-elite trend. It's a signal. A warning.
Microsoft's move proves two things. First, even latecomers can reach the cutting edge of this technology with surprising speed, a fact that should offer some comfort to companies like Apple. More importantly, it signals a deeper truth: to truly innovate and build the future, you cannot rely on someone else's brain. You need your own.
The Control Imperative: One Can’t Build a Rocket on Rented Land
For any large organization, using a third-party LLM for a core product is like building a skyscraper on rented land. It's a strategic risk of the highest order. Imagine Meta building its next-generation glasses with their entire user interface and functionality dependent on an API call to a competitor. Or consider a company like xAI trying to build humanoid robots, their every action dictated by a model they don't fully control. Whether in climate science, biotech, or even the humble productivity suite, ceding control of the core intelligence is a non-starter.
Control is the new currency, and non-tech giants may have to join in as well. Few who can afford to build their own would want to rely on someone else's model. Terms change. Privacy risks flare. Context gets lost. Several consultancies and financial firms have already moved in this direction, and we expect much more of it globally.
Agents Must Breathe. Off-the-Shelf Doesn’t
If the need for proprietary models is a strong current, the need for personalized agents is a tidal wave. While off-the-shelf agents are being touted as the next big thing, their premise is built on a fundamental misunderstanding of human nature. Agents, it turns out, are a thousand times easier to build than LLMs. But more importantly, they are a thousand times more personal.
Think about how you work. The way you arrange your schedule, organize your files, or even phrase an email is unique. It’s a workflow honed by habit, preference, and the specific demands of your life. We are not uniform. The way we talk on the phone is different from person to person; why would we expect our interaction with a digital agent to be standardized? An off-the-shelf agent forces you into its workflow. A truly useful agent adapts to yours.
When the language of instruction is no longer code, but plain English, the barrier to customization vanishes. "Programming" an agent becomes as simple as having a conversation. You can tell it exactly how you want things done. This profound shift makes the idea of a one-size-fits-all "agent store" feel outdated before it has even arrived. The business of selling pre-packaged agents may not be nearly as robust as many believe, precisely because their value is diminished by their rigidity.
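To make that concrete, here is a minimal sketch of what "programming by conversation" can look like under the hood. The ChatModel protocol, the complete method, and the preference text are hypothetical stand-ins, not any particular vendor's interface; the point is that the personal workflow rules live in plain English, editable by the person who uses them, rather than in code someone else shipped.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Stand-in for whichever LLM powers the agent (hypothetical)."""

    def complete(self, system: str, user: str) -> str: ...


# The "program" is just plain English describing how one person works.
# Change a sentence here and the agent's behaviour changes with it.
MY_WORKFLOW = """
You are my email assistant.
- Draft replies in short paragraphs, never bullet points.
- Flag anything from my team lead before everything else.
- Politely decline meeting requests for Friday afternoons and offer Monday slots.
- Sign off every message with just my first name.
"""


def draft_reply(model: ChatModel, incoming_email: str) -> str:
    # The same general-purpose model becomes a personal agent purely through
    # the natural-language instructions above; no bespoke code is required.
    return model.complete(system=MY_WORKFLOW, user=incoming_email)
```

Swap the preference text and the same scaffolding becomes a different agent entirely; that malleability is exactly what a pre-packaged "agent store" struggles to offer.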
The End of the Assembly Line
We have spent a century learning to interact with machines on their terms. We press buttons in a specific sequence. We navigate rigid menus. We learn the mechanical, straitjacketed processes required to order a cab, buy a product, or use a piece of software. That era is ending.
The promise of AI is not just automation, but personalization at a scale we have never seen before. It is the promise that we can interact with technology in the same messy, idiosyncratic, and human way we interact with each other. An off-the-shelf solution, by its very nature, is a step backward. It’s an attempt to pour the dynamic, fluid reality of human workflows into a static, predefined box.
This is the crux of the issue. Our relationship with these tools will not be static. We will continually adjust our methods as the LLMs learn more about us and as we discover new ways to utilize them. A rigid agent can't keep up. People are trying to build agents for everything from sales to project management, but they often fail, not because the technology is bad, but because the concept is too rigid. They try to impose a new assembly line on a process that is inherently creative and personal.
The idea that a single, pre-built agent could effectively manage the complex, ever-changing needs of even a small business, let alone an individual, is a fantasy. It ignores the very reason we need agents in the first place: to bring more flexibility and intelligence to our work, not less.
Conclusion: A World of Our Own
Cryptocurrencies began with a powerful promise: disintermediation. The goal was pure peer-to-peer transactions, a world without brokers and handlers. Yet the reality that unfolded is a study in irony. The crypto world has arguably spawned more exchanges, platforms, and middlemen globally than even the traditional equity markets it sought to sidestep. An architecture of freedom somehow built a new series of walls.
It is quite possible that a technology meant to make our interaction with machines more natural could follow a similar path. The promise of AI is a fluid, intuitive conversation between human and computer. But it could just as easily lead to even more straitjacketed, artificial behaviors than before, forcing us all into pre-packaged workflows disguised as "agents." Here at GenInnov, we sincerely hope that is not the case. Over the last few quarters, we have refined our internal methods and workflows, the very things the world now calls agents, to continually improve our processes and maintain our unique approach. We cannot imagine how adopting efficiency methods designed by someone else would be an improvement, given our own unique habits and processes.
This isn't a call to arms, urging every individual to start training their own neural network. It is an observation of a powerful, undeniable trend. The age of monolithic AI is over. The future is a diverse, decentralized, and deeply personal ecosystem of intelligence. For large companies, the control imperative will drive them to build their own models. For the rest of us, the desire for agency will lead us to customize our digital assistants until they are true extensions of ourselves.
The true revolution is not that machines are learning to think. It's that they are learning to think like us—in all our varied, unique, and wonderfully inefficient ways. And to do that, we won't just need AI. We will all, in our own way, need our own.