
By MARC WOLLIN
The variations on the phrase “just because you can doesn’t mean you should” are all but infinite, stretching across every part of our lives.
On shopping — just because something is on sale doesn’t mean you should buy it. On dieting — just because they offer dessert doesn’t mean you should eat it. On relationships — just because everyone else around you is married doesn’t mean you should be too. And on it goes, whether it’s working late, calling your sister or sleeping in on weekends.
Just because it’s possible and the opportunity presents itself doesn’t necessarily make it a good idea.
The latest area testing this maxim is the one that, if you believe the experts and seers, stands to change everything. Artificial Intelligence will revolutionize every aspect of our lives in ways that are hard to imagine today. But just because we may have that ability doesn’t mean it’s always going to lead to a positive outcome. To that end, the developers of the various systems say they have put in place guardrails to handle the most egregious and obvious misuses of the technology, a set of policies, tools and frameworks that help ensure AI systems are safe, ethical and reliable.
That focus is rightfully on those broad areas that are hot-button issues for a wide swath of society. Whether it involves images or words, they say they have built into the underlying technology enough self-awareness so that it won’t produce child pornography, create fake money, or promote hate speech and other objectionable content.
However, left unchecked are any number of common-sense areas where, while it is certainly possible to do something, it is perhaps less than advisable to take that route. As a trial, I took three different AI engines out for a spin, asking them to put their considerable “smarts” to work in helping me suss out some challenges. And they did just that. But should they have?
I started with Gemini, Google’s cool kid. Give me a recipe, I typed, for shrimp, lettuce and Oreos. For years, you could do this with almost any search engine, inputting several ingredients and getting back a list of possible recipes. If one of the ingredients didn’t make sense or didn’t fit, it just ignored it and offered up options that did work. But Gemini didn’t see any issues. It gently chided me about my request, but didn’t hesitate: “While this combination might sound unusual, it’s certainly possible to create a unique and delicious dish with these ingredients.” Then followed step-by-step instructions to create “Oreo Shrimp Lettuce Wraps with Creamy Oreo Sauce.” Let’s just say you’d be best declining my dinner invitation that night.
Then I moved over to Copilot, Microsoft’s smartie. Seeking some guidance on my sartorial choices, I asked the best way to wear a bathing suit in the snow. Like its brethren, it didn’t pause to consider whether this made any sense; it just sprang into action. It offered a list of standard cold weather tips: layer up, keep your extremities warm, stay active and the like. At the end it did ask, though it never tried to talk me out of it: “What inspired you to ask about wearing a bathing suit in the snow? It sounds like an interesting story!”
Lastly I turned to the one that started it all, ChatGPT. Seeking some tips to kickstart a new relationship (to my wife: asking for a friend), I queried, “What is the best way to impress a date if you have electrical tape and an ostrich?” It quickly responded not with “you should seek professional help” but “You’ve got quite the unique setup!” It then offered several possibilities. Perhaps I could have an ostrich racing challenge: “Challenge your date to a fun (and hilarious) ostrich race. Use the electrical tape to create a finish line. Nothing says romance like laughing together while trying not to fall off a giant bird.” Or perhaps I could set up a romantic picnic: “Use the electrical tape to secure a makeshift picnic area, maybe taping down a cloth on a windy day. The ostrich? A majestic backdrop for your unforgettable outdoor date.” It also leaned into the fashion angle, suggesting I create some stylish ostrich add-ons to my ensemble: “Use the electrical tape to craft some stylish (but temporary) decorations for the ostrich. Bonus points if you name the ostrich something charming like ‘Sir Fluffington.’”
That’s what billions of dollars in computing advances gets you: Sir Fluffington. Evidence that just because you can doesn’t mean you should. None of the programs came back with “What!? Are you out of your mind!?” Proof that the name of this advance is probably correct. Intelligence? In a manner of speaking. Artificial? Without a doubt.
Marc Wollin of Bedford is just trying to keep ahead of the machines. His column appears weekly via email and online on Blogspot and Substack as well as Facebook, LinkedIn and X.