Stop Bracing for the Worst and Start Embracing GenAI

TL;DR – Use AI. Even if you’re hesitant, give it a try. Ask it to help you come up with recipe or outfit ideas, then see how it can help you brainstorm ideas for work. Learn its limits. Learn how it can be useful to you.

The program that got me so excited about Generative AI

In December 2024, I attended a three-day Wharton Executive Education program on Generative AI and Business Transformation. Before attending, I leaned toward a doom-and-gloom view: AI would ruin humanity, steal jobs, and ultimately destroy us all. After the program, I felt almost manic—completely bullish about GenAI. I couldn’t stop thinking about it. I felt I’d discovered critical information that everyone needed to know. It also struck me that this is truly a revolutionary moment in history—finally!

While I’ve calmed down a bit since December, I’m still more optimistic than most about GenAI as a GPT, a General Purpose Technology (ironically, the same initials). I believe GenAI has the potential to significantly disrupt workplaces, and by extension, society. That’s probably where the doom-and-gloom perspective finds its footing. But I also see enormous potential for good. If the right people make balanced decisions about how GenAI is integrated into organizations, we can mitigate those darker scenarios.

Before we dive into practical applications, let’s demystify some key terms. The program I attended helped me better understand how the technology functions, which let me approach GenAI with more confidence and clarity.

What is Artificial Intelligence? What is Machine Learning? What is a Large Language Model? What is anything?

Understanding and knowledge help us feel less unsure—and we can’t act on what we don’t know. So I’d like to offer a broad explanation of this complex technology in hopes of bringing some clarity to a sometimes scary (and definitely strange) landscape. If it helps, neural networks are inspired by the structure of the human brain. And guess what? We still don’t fully understand how our own brains work. At best, we can observe which parts “light up,” but we don’t know precisely what’s happening in there. It’s similar with neural networks; a lot of it is a black box once you dig deeper.

AI itself has been on humanity’s mind for a long time. From Greek myths about Hephaestus’s golden automatons serving the gods, to Pascal’s mechanical calculator in 1642, to Alan Turing’s theoretical computing machine in 1936, we’ve been fascinated by the idea of creating thinking machines. This isn’t just about automation; it reflects our persistent dream of creating intelligence beyond our own. Maybe it’s because we’re lazy, maybe we have a god complex, or maybe it’s just part of our ongoing curiosity. Either way, AI isn’t new.

ML vs. LLM

To avoid turning this post into a complete Wikipedia entry, I’ll focus on the difference between machine learning (ML) and large language models (LLMs). First, though, let’s get clear on some vocabulary. Artificial Intelligence is a very broad term for machine intelligence; ML and LLMs are both subsets of AI. The other term you’ll often hear is “generative AI” (GenAI), which is associated with frontier models such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. All LLMs are a type of GenAI, but not all GenAI is an LLM: LLMs generate new text-based content, whereas other GenAI models generate new audio or image content. I will use LLM and GenAI interchangeably in this blog post.

Many of the modern AI tools you encounter day-to-day—like Siri, Google autocomplete, or Netflix recommendations—use ML to make predictions from patterns in existing data. LLMs, on the other hand, are designed specifically to understand and generate human language, often with a bit of randomness built in (more on that later). While LLMs grasp how words relate to each other, they lack true “knowledge” or mental models. After all, an LLM is simply predicting the next word in a sentence based on pattern recognition. As was noted in the Wharton program, “in the philosophical sense, [GenAI output] is BS: speech intended to persuade without regard for truth.”
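To make “predicting the next word from patterns” concrete, here is a toy sketch in Python (my own illustration, not how any real model is built): a bigram model that only knows which word tends to follow which in a tiny corpus. Real LLMs replace this lookup table with a massive neural network trained over tokens, but the core move of predicting what comes next from observed patterns is the same.

```python
from collections import Counter, defaultdict

# A tiny "corpus". Real models train on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat ate the fish on the mat".split()

# Count which word follows which -- a classic bigram model.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' ('cat' and 'mat' tie; first-seen wins)
print(predict_next("cat"))  # -> 'sat' ('sat' and 'ate' tie; first-seen wins)
```

Notice the model has no idea what a cat or a mat is; it only knows what tends to come next. Scale that up by a few trillion words and you get something that sounds remarkably knowledgeable without “knowing” anything.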

Words and Patterns

The way generative AI has learned human language is fairly straightforward to understand, though I’m going to greatly simplify it. We’ve already covered that GenAI is an incredibly powerful word-prediction tool. Models get there by being trained on massive datasets to learn the relationships between words. We know those training datasets have included enormous amounts of text scraped from the internet and from books, though companies have stopped disclosing their training sources recently (probably to avoid lawsuits). The basic unit of text for an LLM is called a token (you may have heard the phrase “next-token prediction”), which can be a whole word, part of a word, or occasionally a couple of words. This is important to know, especially if you’re trying to use GenAI for counting. The token-based approach, while powerful for language understanding, can lead to some interesting limitations. For instance, in my current job we need to track the number of times a certain topic is raised in a set of notes or reports, and we’ve consistently observed inconsistencies in the output when GenAI is asked these kinds of counting questions.
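If you want to see tokenization for yourself, OpenAI publishes its tokenizers in the open-source tiktoken library. Here is a minimal sketch (the exact splits depend on which encoding you load):

```python
# pip install tiktoken  (OpenAI's open-source tokenizer library)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for text in ["strawberry", "unbelievably", "The cat sat on the mat"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} tokens: {pieces}")
```

Run it and you will see that words do not necessarily map to single tokens, which is exactly why asking a model to count letters or occurrences can go sideways: it never “sees” the individual letters in the first place.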

In addition to the nuances of training data and tokens, generative AI models have a degree of randomness built into their processes. This randomness is not a bug, but a feature of how these models are designed to work. Think of it as a “butterfly effect”: small variations in the model’s calculations can lead to significant changes in the output. Knowing that GenAI is non-deterministic matters, because the same exact prompt, given to two different GenAI models or even to the same model twice, can produce different output. While randomness introduces variability and unpredictability, it is also a source of creativity, which makes GenAI a great candidate for brainstorming. It’s still important to maintain oversight, though: randomness means it can be difficult to know how or why GenAI produced a specific result.
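That randomness is not mystical. At each step, the model assigns a score to every possible next token, and a “temperature” setting controls how adventurously it samples from those scores. Here is a minimal sketch of that sampling step (the scores below are made-up numbers, not output from a real model):

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick a next-token index from raw model scores (logits).

    Low temperature sharpens the distribution (nearly always the top
    token); high temperature flattens it (more surprising choices).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Made-up scores for four candidate next tokens.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits) for _ in range(10)])                    # varies run to run
print([sample_next_token(logits, temperature=0.01) for _ in range(10)])  # almost always 0
```

At temperature near zero, the same prompt gives you nearly the same answer every time; turn it up and you get the variability, and occasionally the creativity, described above.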

The Truth Question

I think this is the crux of what’s on everyone’s mind: how do we know if AI output is true or accurate? It’s a great question. Then again, how do we know if what people say is true or accurate? Trust is one variable we must consider, but I don’t think we need to let these important questions hinder our use of the technology. In fact, the people asking these questions are precisely the ones who should use GenAI, so they can contribute to the conversation about its strengths and limits. Think of it this way: when we conduct research on the internet, we don’t take the first thing we read as fact. We do more research. We fact-check. We think critically about the subject. None of that needs to go away with GenAI. If we frame GenAI as a new tool, which it is, and understand that not all tools are fit for every task, then we’re in a good starting place.

So what do we do with this information?

We use GenAI. Some people want to discount GenAI immediately because of data privacy concerns, hallucinations, biases, or inaccuracies. But they often do this without actually using or understanding the technology. And frontier GenAI models have come a long way; hallucinations and inaccuracies are becoming less common. In December 2024, when I asked ChatGPT how many r’s are in the word “strawberry,” it said two. In February 2025, it accurately said three. That may seem like a no-brainer to get right, but because the unit of text is the token, not the word or letter, counts can be wrong. And that’s not all: newer models like GPT-4 can now reliably handle complex tasks like analyzing legal documents or writing functional code, tasks that earlier models struggled with. When I attended the Wharton program in December, it was clear that GenAI wasn’t really “thinking” before it output its responses. Now we have multiple reasoning models that, through the use of inference-time compute, “think” before they answer.

Just as I was finishing up this post, OpenAI announced that ChatGPT search is now available to everyone, completely free, no sign-up required. This is huge; it’s a true competitor to Google (sorry, Bing). This technology is changing at a breathtaking pace, and it is disruptive, which brings both opportunities and challenges. My advice: try it. Once you play around with GenAI, you’ll learn where it excels, where it underperforms, and how quickly it will improve.

One of the greatest pieces of advice from the program was this: don’t look for a grand, transformative way to use or implement GenAI; look for quick wins. Almost every day I ask Sous Chef, a custom GPT in ChatGPT, to help me with a recipe based on ingredients I have in my kitchen. When I’m conducting research at work on a technology I don’t have hands-on experience with, I ask questions and have a conversation to make sure I’m understanding what I’m learning. I use Claude 3.5 Sonnet to help me edit my writing. Think about the quick wins in your life: the mundane, repeatable tasks that leave you too tired for the more creative, meaningful aspects of your work. Brainstorming, summarizing, answering the same question via email over and over again. GenAI can help you win back time, so you can focus on the more fun aspects of your work.

By doing this, you’ll see firsthand what GenAI can and can’t do for you. Remember: you’re the human in the loop. Review the work. Edit it. Treat it like a supportive collaborator. That, in my opinion, is one of GenAI’s greatest strengths: none of us has to work in isolation anymore. We have a tireless collaborator available 24/7. People get tired. People are busy. People often don’t want to burden each other. GenAI, on the other hand, doesn’t mind—and that’s where its real power lies.

Looking back at my own journey from GenAI skeptic to enthusiast, I recognize that readers of this post might be anywhere along that spectrum. Maybe you're feeling that initial doom-and-gloom I once felt, or perhaps you're already excited about the possibilities. Wherever you are in your GenAI journey, the most important step is to start experimenting. Ultimately, GenAI is a tool. Like any tool, it has risks and limitations, but it also offers incredible possibilities. The best way to understand its potential is to dive in, experiment, and keep a critical eye. If you're sitting there thinking, "can GenAI do xyz for me?" try it!

HIPAA Security Rule: Proposed Changes Explained

The Office for Civil Rights (OCR), which sits within the U.S. Department of Health and Human Services (HHS), issued a Notice of Proposed Rulemaking (NPRM) to modify the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security Rule. That is an alphabet-soup mouthful. Basically, the U.S. government is trying to update the compliance standard for electronic protected health information (ePHI), which was last updated in 2013. I think this is a welcome proposal, since the technology and threat landscape has changed significantly in the past 12 years. For example, in 2011 only 8% of hospitals and 34% of physicians used an electronic health record system. As of 2021, those numbers were 96% and 78% respectively. This means that if you’ve been to the doctor in the last 10 years, chances are high that your health information is floating around on some health system’s network.

With so much health data now online, the stakes for protecting it have never been higher. Health systems in the United States are increasingly targeted by ransomware because ePHI is gold on the dark web, and more than half of the hospitals attacked end up paying the ransom to recover their data. But that’s not the only impetus for the NPRM. Health systems have been incredibly slow to implement the safeguards required by the HIPAA Security Rule to protect the confidentiality, integrity, and availability of ePHI. And, let’s be honest, the Change Healthcare ransomware attack that impacted 100 million people in February 2024 likely prompted this NPRM.

So, what’s the government proposing to fix these issues? Here are the key changes they’re suggesting.

  • No More Skipping Key Requirements (AKA Goodbye “Addressable”): Organizations would no longer have the option to skip certain security measures or choose alternatives without clear documentation. Instead, they’d need to either implement the required safeguards or adopt a reasonable alternative and document their approach.
  • Basic Cyber Hygiene: To align with industry best practices, organizations would need to follow some minimum cybersecurity standards. These include:
    • Assigning a qualified information security official to oversee security efforts.
    • Eliminating default passwords (for the love of all that is good, please get rid of those “admin” passwords).
    • Using multi-factor authentication (MFA).
    • Keeping offline backups of critical data.
    • Installing important security patches in a timely manner.
    • Being transparent about the impact of any incidents and vulnerabilities.
  • Standard Security Programs: All organizations would be required to implement a formal security program and meet minimum security control standards to ensure a consistent level of protection.
  • Risk-Based Approach: Organizations would need to adopt a risk-based mindset when managing their security programs. This means focusing efforts where risks are highest and tailoring measures to fit their specific situation.
  • Risk Analysis with NIST and CISA Guidance: When performing risk analyses, organizations would need to follow the standards and guidance provided by experts like the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA).
  • Clearer Guidance on Compensating Controls: The updates would provide more detailed definitions and examples of alternative security measures, helping a variety of organizations find solutions that work for them.
  • AI and New Tech in Risk Assessments: Organizations would be required to factor artificial intelligence systems and other emerging technologies into their risk analysis to ensure they’re not overlooking potential vulnerabilities.
  • Consistent Cyber Incident Reporting: A baseline for reporting cybersecurity incidents would be established, making the rules clearer and aligning them with other reporting requirements for healthcare infrastructure and federal contractors.

I think these updates are far more palatable and straightforward than the current rule. Especially since they scrapped that dreadful “addressable” designation. When I first read the HIPAA Security Rule, I spent at least 10 minutes trying to figure out what the heck “addressable” meant. So, hooray for that.

The last thing I’ll say on this NPRM is that I’m kind of dismayed by its doubling down on holding rural and small health care practices to the same standard as large health systems. Rural and small practices are lucky to have a single IT staff member, let alone a department. All health systems struggle to do more with less, and COVID further drained budgets and resources. Since HIPAA non-compliance basically means financial penalties, I think our government should reconsider putting rural and small practices in the same bucket as large health systems. Maybe it could offer more support, like grants or training, to help smaller practices catch up without stretching their already limited resources. These changes aren’t perfect, but they’re a step in the right direction. With the right support, even smaller practices can rise to the challenge and safeguard the sensitive health information entrusted to them.