
Spotlight on Frank Sikernitsky





Frank Sikernitsky has been at the forefront of technology for 30 years, having created the first web-based magazine. In this Spotlight feature, Frank shares his insights on AI and his thoughts about his latest co-authored book, “AI for Brands: Embracing the Future”. Frank is a featured speaker at the Global Brand Convergence®; join him and other accomplished speakers and performing artists on November 29 at 8 AM ET, when we stream for free online.







Q. What is the premise of your new book, AI for Brands: Embracing the Future, which you co-authored with Tery Spataro and Whitney Tindale?

A. The premise is that the arrival of generative AI and large language models (such as DALL-E, ChatGPT, and Google Bard) promises to fundamentally change the economics of work and business, not to mention the relationship between businesses and consumers. Brands sit right at this intersection, so it's a great place to start. The book aims to provide insights and strategies for brands to navigate this new and often counterintuitive landscape, as well as avoid the pitfalls that accompany such powerful tech.


Q. What was the inspiration behind writing the book?

A. Over the course of 2023, we watched the concept of 'AI' break out of the engineering lab and into the public consciousness. Artificial intelligence refers to a dozen or more different technologies, many of which are already mainstream, but until this past year, public perception of AI was mostly of psychotic computers in sci-fi movies. It has suddenly become real, accessible, and capable enough to change the economics of work -- and, by extension, to change society. Plus, we love the technology and want to see it develop, but "rightly understood." That means we want to help brands succeed using AI while also considering the ethical and privacy concerns that come with this newfound leverage.

Q. Do you use research, case studies, or a combination of the two?

A. We've done both -- much of the research came from building and experimenting with the technology ourselves. I built and operated deep learning labs almost 10 years ago, working on huge telemetry datasets, and not long after that I built voice assistants. As a group, though, we have concentrated on generative AI over the past two to three years. Much of the information in the book combines what we've learned testing generative AI's capabilities with what we've learned over several careers' worth of experience building and operating businesses.


Case studies on AI are still thin, although some brands jumped in early -- we talk about Coca-Cola and a few others in the book that have deployed generative AI as engagement and marketing tools.

Q. What specific trends do you see in terms of how organizations and leaders need to think about AI and its applications?

A. Business frequently talks about technology 'moving the needle,' but AI takes a sledgehammer to the whole dashboard. It brings orders-of-magnitude gains in efficiency and throughput -- but that much power really does come with heightened responsibility. The force multiplier brings the reward of efficiency but also a trio of risks: data quality, privacy, and ethical considerations. Those issues become unavoidable at 'AI scale,' where everything is multiplied by 20x, 50x, or more. Errors now snowball instantly, requiring leaders to confront them comprehensively or risk spectacular failure.

Q. As someone who has been working with technology his entire career and was out there early with the first digital magazine, what are the elements that distinguish AI as a technology?

A. I was there at the very beginning of the Web, publishing the first Web magazine almost exactly thirty years ago, and this 'AI moment' feels more like that 'Web moment' than anything since -- the very beginning of a widespread change in how we interact with information and how we reach out into the world (and how it reaches back to us).


In this case, I'm talking about chatbots like Bard and ChatGPT -- these AIs are the first products to communicate with people smoothly and widely on their own terms, in extemporaneous, plain language. If you step back and think about that, being able to give and get information conversationally bypasses so many layers of user interface that have developed over the decades -- and the business models that sustained them. That's incredibly disruptive and exciting at the same time.


Q. How do you use AI in your work?

A. One way I like using AI is as a devil's advocate for my original work. People usually don't enjoy dealing with criticism from others, even when it's constructive, but asking a bot doesn't carry the same anxiety. The feedback can be just as harsh -- the machine won't pull any punches (unless you ask it to). You can ask it for the five worst things about your work, five missing things, or five ways to improve the structure. You can iterate so much faster this way, without the uniquely human hang-up of feeling judged.
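As an illustration of that workflow, a minimal sketch of scripting the same kind of critique with OpenAI's Python SDK might look like the following. This is an illustration only, not a recipe from the book or the interview: it assumes an OPENAI_API_KEY is set in your environment, and the model name, file path, and prompt wording are placeholders you would swap for your own.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Read the draft you want critiqued (placeholder path).
draft = open("draft.txt", encoding="utf-8").read()

# The same kind of request described above: structured, unsparing critique.
prompt = (
    "Act as a tough but constructive critic. For the draft below, list "
    "the five weakest points, five things that are missing, and five "
    "ways to improve the structure.\n\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

Changing only the prompt lets you cycle through different angles of critique without the feedback ever feeling personal.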

Q. What are some of the major limitations of AI as it exists today?

A. One limitation (that some AIs are starting to overcome) is transparency. AI has frequently been compared to a 'black box,' even by its creators. But to be truly trusted, a source of information must be transparent enough to show where it got its data and how it arrived at its conclusions. Was the data legitimate? Was it biased?

A second limitation that's only just emerging is how to handle sensitive or delegated information. If chatbot-style AIs are so useful, they will eventually hold all sorts of sensitive and regulated data. How do you implement privacy and access permissions as 'features' within the 'black box' of the AI?

The third limitation is security, which has yet to catch up. It's very easy to get most models to act against their own guardrails, give up privileged information, or explain exactly how to do things you're supposed to be blocked from doing. Part of this is the newness of the tech -- security has frequently (and unfortunately) been an afterthought because it slows down development. This will improve, but it underlines just how early these days are -- right now it's a race to get features out quickly.


Q. If someone is just getting started and wants to understand how to use AI and has no experience, where do they begin? How might they build on that?

A. Most chatbot-style and image-generating AIs let you use them in a limited fashion for free, and because they work in plain language, they're easy to relate to. You can visit Google Bard, ChatGPT, and many others and talk to them. My advice is: be bold and learn by doing. You can't break anything, and the machine won't judge you (unless you ask).

Beyond that, there's a wealth of articles and videos on social media, especially YouTube, ranging from the very basic ("How do I get the AI to do x, y, or z?") to the very technical (data science PhDs explaining how the models work mathematically). Do be careful about sharing personal data for now, as not every AI-enabled service has a comprehensive privacy regimen (yet). But you can run some elaborate experiments without giving up any secrets.

Be creative! Through all of this, remember that the road to this future is not yet paved. Nobody has more than a small part of the answers. Keep notes and try new things: your unique experiences and viewpoints may lead you to a novel way to use AI in your niche. It's a powerful tool.





