
What Is Informatics: Insights From Konnex.AI Founder David Wild

May 10, 2024

What is informatics, and why is this field of study important today? David Wild, a professor at Indiana University, is the featured guest on episode 6 of The AI Purity Podcast, where he answers this question and more.

David Wild brings a wealth of knowledge and experience as the director of the Crisis Technologies Innovation Lab and the Integrative Data Science Lab at the university where he teaches. Beyond being a professor, David has founded the companies Konnex.AI and Data2Discovery and authored the book “Personal Digital Resilience Handbook”.


David Wild’s Career Journey

During the 1980s, when David Wild was growing up in the UK, home computing exploded. “[It was a] really exciting time in computing, and I just fell in love with computers and what they might be able to do”, shares David. It was then that David started learning and writing code. He even started a company that sold utility software and other software for home computers.

After completing his computer science degree, David recalls struggling to apply his newly learned skills: “I was kind of struggling as to how can I really have [an] impact with these skills that I’m learning”. Having an impact on people was important to David, and thanks to one of his professors he got the opportunity to pursue a PhD in computer-aided drug discovery. It was an entirely new discipline that David was quite excited about. He went on to work in the pharmaceutical industry for several years alongside scientists discovering new drugs, before a career shift in 2006.

Besides doing something that impacts people’s lives in a positive way, David has also “fallen in love with difficult problems”. It was the driving force behind eventually leveraging AI, his computing skills, and his research abilities to do more good through the companies he established. 

Today, he brings his diverse talents in AI technologies to emergency management, the pharmaceutical industry, and academia as a professor and author.

AI Purity’s services cover a wide range of industry professionals and one of its main goals has always been to safeguard academic integrity. Learn how AI Purity aims to support educators as more and more students lean on AI-assisted learning. 

David’s History In The Medical Field

David’s computer science degree came before his career in the medical field as a certified emergency medical technician. “Even from being a kid, I loved the action of emergency response”, David says. Since his day job kept him in front of a computer, he wanted to do something a little more exciting with more “direct impact”. So David trained as an EMT, and he says he loved it.

The distance between his two careers eventually left David feeling a disconnect. “For over 14 years, I’ve been going out on ambulance runs to patients”, David says, while “in my day job, I was playing with all these cool AI tools.” As a computer scientist and EMT, David says, “I had a longstanding passion to bring these two worlds together.” In 2019, when David created the Crisis Technologies Innovation Lab at Indiana University and became its director, he was able to merge the two disciplines.

According to David, the Crisis Technologies Innovation Lab is dedicated to uncovering how advanced technology can help first responders, emergency managers, and everyone else in the industry keep people safe. 

AI is more than just large language models that can generate text and images; there are many real-world AI applications benefiting various industries that you might not have known about. Read “Machine Learning Applied In The Real World” to learn more.


What Is Informatics?

Back in 2006, when David had just started teaching at Indiana University, someone approached him at a gas station. He had an “IU Informatics” sticker on his car, and the stranger asked, “What is informatics?” David says that at the time he had no answer, so he replied, “I don’t actually know. I’m going to find out.”

To define informatics, David says there’s a way to characterize and understand the term better. According to David, data science is about how we can use, predict, and manage data; computer science is about building machines and programming them to do useful things; and “informatics is right at the interface with human beings”. It’s the study and application of information technology in ways that are useful to people.

There are different types of informatics, according to David. Animal informatics, for example, is about “what we can learn from animals about the use of technology and how we can actually have technology that helps animals.” Another type, according to Michigan Tech, is health informatics, defined as the ability to analyze data that is critical to the success of healthcare organizations. For David, the definition of informatics is the “space between technology and human beings.”

Learn about AI potentially being used to disseminate disinformation in episode 3, “AI’s Impact On Ethical Journalism & Disinformation”. 


The Importance of Digital Resilience

The concept of digital resilience came to David while he was building the Data Science program at Indiana University, which you could call an application of medical informatics. During the process, he met with companies that collect information about people, and he saw that the practice wasn’t as simple as storing information. Data collected from people gets integrated and scored, and he thought, “so many things can go wrong with this”.

The sheer amount of information and data shared online is becoming a huge vulnerability. David even says, “We’ve lost control of all this personal information about us, but it’s also the fact that we’re reliant on systems that somebody else is controlling.” David thought about how data could be kept private, or at least protected, much like locking the front door of your home.

The Personal Digital Resilience Handbook shows readers how to exist safely online without having to worry about their personal information or data being compromised.

AI Purity is a platform that safeguards the information and data our users trust us with. When you use our AI text detection platform, you can be sure that AI Purity doesn’t collect, store, or share the text uploaded on our platform.


Konnex.AI and Data2Discovery

One of David’s companies, Data2Discovery, was born out of research he did at Indiana University. Together with Professor Ying Ding, he discovered ways to integrate data on chemical compounds, proteins, and biological pathways. With that data, they could generate insights and predictions about drugs’ potential side effects, toxicology, and effectiveness.

Data2Discovery is a “mature AI for drug discovery” company that took almost a decade to build. It partners with pharmaceutical companies and is a great example of informatics applied in practice.

Konnex.AI differs as it is meant for the emergency management space. David says, “I saw this recurring problem that as we get this explosion of information in the world and the information sources, it becomes increasingly difficult to bring it all together in a way that’s kind of actionable.” There’s a huge amount of data and information available today, unlike back in the day when people would just read newspapers. This information is critical for decision-making, but what if there’s too much for one person to collate? 

The solution Konnex.AI offers, according to David, is using AI to automate the process of creating reports and briefings. Information and data from many sources are summarized into a digestible form so that the people responsible for making decisions aren’t overwhelmed or overloaded with information.
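The pipeline David describes (aggregate many sources, summarize them, keep decision-makers unburdened) can be sketched as a tiny program. This is a hypothetical illustration only, not Konnex.AI’s actual implementation: the `summarize` function is a naive first-sentence stub standing in for a generative model, and `SourceItem` is an invented structure. The one design choice it does reflect from the interview is that every summarized line links back to its source document so readers can verify it.

```python
from dataclasses import dataclass

@dataclass
class SourceItem:
    """One incoming information source (hypothetical structure)."""
    title: str
    url: str
    text: str

def summarize(text: str) -> str:
    """Stand-in for a generative summarizer: keep only the first sentence."""
    first = text.split(". ")[0]
    return first if first.endswith(".") else first + "."

def build_briefing(items: list[SourceItem]) -> str:
    """Collapse many sources into one short briefing; each line carries a
    link back to its source document for verification."""
    lines = ["DAILY BRIEFING"]
    for item in items:
        lines.append(f"- {item.title}: {summarize(item.text)} [source: {item.url}]")
    return "\n".join(lines)
```

Calling `build_briefing` on a handful of sources yields one short, linked digest instead of many full documents, which is the information-overload fix the interview describes.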

AI Purity is a perfect example of leveraging AI while pioneering ethical AI use. Find out more about how AI Purity supports the next generation to teach them the implications of AI-assisted learning. 

Listen To The AI Purity Podcast

David Wild perfectly encapsulates what informatics is and answers the question ‘What is informatics used for?’ in episode 6 of The AI Purity Podcast. The full episode is available to watch on YouTube and listen to on Spotify.

AI Purity might be the newest AI text detection platform, but it offers features never before seen in an AI detector. Check out our website and see how the solutions AI Purity offers can help you in our growing AI-driven world.

Listen Now


David Wild [00:00:01] Informatics is right at the interface with human beings. How can we make all this technology useful for people? Amazing stuff happening with LLMs, with AI, with computation power, but what do we do with all that? A lot of work to be done in that middle space.

Patricia [00:00:34] Welcome back to The AI Purity Podcast! The show where we explore the intersection of artificial intelligence and the pursuit of truth. I’m your host, Patricia, and today we have the privilege of featuring one of the faculty members from Indiana University Bloomington’s Luddy School of Informatics, Computing, and Engineering. He’s the author of the Personal Digital Resilience Handbook and the founder of Data2Discovery and Konnex.AI. Our guest is not only a professor, author, and researcher, he is also a certified emergency medical technician. With a wealth of experience, over 100 research publications, and substantial funding accolades, today’s guest brings a unique perspective to today’s discussion. Join us as we delve into the fascinating world of informatics and the ethical implications of AI technologies. Welcome to the show, Professor David Wild.

David Wild [00:01:21] Hi, Patricia! It’s wonderful to be here.

Patricia [00:01:23] Of course! We feel so honored to have you today. I’m going to get right into it. Please let us know how you got started as a professor at Indiana University and share your journey into informatics and data technologies.

David Wild [00:01:35] Yeah, absolutely! If we really want to go back to the beginning, it all started when I was growing up in the 1980s in the UK, and there was the explosion of home computing at that point. Really exciting time in computing, and I just fell in love with computers and what they might be able to do. So, I started writing code, learning different languages, and actually quite early on started a company selling utility software and other kinds of software for home computers. So, this led me to a degree in Computer Science. And after my degree, I kind of really – I was kind of struggling as to how can I really have impact with these skills that I’m learning, and one thread through my whole life is like I really have to have some way that I’m kind of bringing some impact for people with what I’m doing. I’m not that theoretical a person. So, I had the opportunity. Professor Peter gave me the opportunity to do a degree, a PhD in computer aided drug discovery, and this was just great for me. I could take my skills and apply them to this drug discovery. Like, discovering new drugs. What could be better than that? So, I learned this whole new discipline. I worked in the pharmaceutical industry for several years, working with scientists, discovering new drugs. Then in 2006, I was kind of ready for a shift of career. Had the opportunity to come to Indiana University to start some new programs here. I did start the Data Science program a little bit later on, and this also enabled me to kind of build a research kind of interest as well.

Patricia [00:03:11] Amazing! So, did you always know that you were going to, you know, shift your computing degree into applying that to health care, for example, like drug discovery, and when did AI become integrated into that?

David Wild [00:03:23] Yeah. Well, when you get to my age, you feel obliged to have a really consistent narrative for your life. But really, you know, there was a lot of real time decision making going on. But I just – I’ve always fallen in love with difficult problems that if they’re solved, they have real impact for people. So, that’s really driven how I think about using my computing skills and my computing research, and that’s really translated into AI as well. How can we use AI to make people’s lives better, have positive impact for people and not negative impact?

Patricia [00:03:58] And you were talking earlier about the research initiatives and the labs that you created at Indiana University. Please elaborate on those research initiatives, such as the Integrative Data Science Laboratory and the Crisis Technologies Innovation Lab.

David Wild [00:04:12] Yeah, absolutely! So, the Integrative Data Science Lab, one of the interesting things about Data Science is we tend to really silo the work that we do. So, we maybe have, you know, a list of customers for a particular company with some properties, and we try and learn something or do something predictive with that one data set, or we have a data set of how drugs in the body respond to one protein. So, we do a lot of work on these very siloed data sets. But a lot of the harder problems in the world can’t be reduced down to one data set or one very small silo; they require understanding from many different places. So, the Integrative Data Science Lab is focused on these complex problems and how we can actually connect data sets and connect very different kinds of data to find solutions to these big problems. The core one for that lab is health care and drug discovery. Our bodies are pretty complicated and very connected, so can we bring that connectedness into the world of data? So, we use things like graph technologies and computation on top of graphs and various other methods that let us integrate and map the relationships between data. The Crisis Technologies Innovation Lab came from a little bit of a different source. It actually came out of my frustration, because I think you mentioned I’m an EMT. And for over 14 years, I’ve been going out on ambulance runs to patients, and there was a complete disconnect. In my day job, I was playing with all these cool AI tools. I go out on an ambulance run, and I’m kind of writing things on my glove. There’s just no effective use of technology. Same for emergency management, which is a related discipline for larger incidents. So, I had a longstanding passion to bring these two worlds together. And that really came together in 2019, when we created the Crisis Technologies Innovation Lab, literally dedicated to how can we use these more advanced technologies to help first responders, emergency managers, and people trying to help us all keep safe.

Patricia [00:06:20] Can you tell us a little about the history of how you got started as an EMT? Did that come first or your degree in computing?

David Wild [00:06:26] No, definitely the degree in computing came first. Although even from being a kid, I loved kind of, you know, the action of kind of emergency response. But it was in 2010, I just kind of decided, since I spend all my time in front of a computer, I wanted to kind of do something a little more exciting and with direct impact as well. So, I trained as an EMT, and I loved it. Like, just going out immediately if, you know, somebody really needs some help, and you’ve got some tools to help them, and I just really liked that kind of dynamic. Plus, it just got me into this world of people who do a very different job than I do. So, I could start to understand, you know, some of how technology might help.

Patricia [00:07:08] And David, you created two companies. You’re the founder of Data2Discovery and Konnex.AI. Could you tell us a little bit more about how these companies got started, what they’re all about, and how they contribute to solving complex problems in health care and information management?

David Wild [00:07:23] Yeah, absolutely! So, Data2Discovery actually emerged from the research I did at Indiana University, jointly with Professor Ying Ding. And we actually discovered some ways to be able to integrate lots of different kinds of data about chemical compounds, proteins, different biological pathways in the body, and then compute on top of that so that we could actually get some insights and predictions that we couldn’t get using other methods, particularly, for instance, potential side effects of drugs, or toxicology of drugs, or just whether drugs could be effective enough or which kinds of drugs could be effective. So, we actually had, kind of, customers waiting in the pharmaceutical industry who were telling us, “How do we get access to this stuff?” in the research lab at the time. So, we actually formed the company out of necessity. We had people in the industry really wanting this. So, we’ve actually spent almost a decade building this internal stack of data and technologies that let us compute and help with drug discovery. So, you know, now we really have a pretty mature AI for drug discovery company. Most of our work is we partner with pharmaceutical companies to apply these methods to their problems. So, we’re essentially a services company right now, but we’ve actually used the stack internally. We just recently discovered a potential new drug for a disease called Mycobacterium abscessus, which is a particularly unpleasant skin disease, which affects children with cystic fibrosis. There’s no cure. We don’t know if it’s going to be a drug yet, but we have early signs that this could be helpful. Konnex.AI is very different. This came out of the emergency management space. Particularly, I saw this recurring problem that as we get this explosion of information in the world and the information sources, it becomes increasingly difficult to bring it all together in a way that’s kind of actionable. So, if you just think about news sources, we used to, you know, back in my day, we used to just read the newspaper. We might even be reading two newspapers, but now we all have, like, 5000 sources on Reddit and on, you know, different newspapers and Twitter, and it’s a huge amount of work just trying to go through it all and get all the information you want. Now, for people in critical decision making roles, this is important. Like, if we miss a couple of news stories, nobody’s going to die, but there are people in critical decision making roles, emergency managers, first responders, all kinds of senior decision makers, who if they miss something, something bad could happen. So, there’s been this kind of problem that people don’t have time to spend going through all these sources to be able to, you know, get information, summary information, together and make decisions. If you want to solve that problem, you normally get somebody to write you a briefing or report. You pay a consultant to go out and do that, but not everybody can afford it. The president can afford it. The president has a presidential daily briefing, but the president only has that because they have a huge number of people coordinating it all together. What Konnex.AI does is use AI to automate that process of creating reports and briefings, so you can have a daily briefing, or a briefing as often as you need. You get to choose the sources, have hundreds of sources it comes from, and that gets summarized down into a really easy to digest briefing that a human being can read, so it’s solving this information overload problem that people have.

Patricia [00:11:08] That’s really fascinating! So, I guess the inspiration for founding Data2Discovery was, like you said, born out of necessity, and I guess it does streamline drug discovery. Would you share with us the unique contributions that the company has brought to AI driven drug discovery that distinguish it from the traditional approaches?

David Wild [00:11:26] Yeah, it’s really about accelerating. So, a lot of the work that we’ve done that I talk about with pharmaceutical companies behind the scenes is essentially just speeding up the processes. It can take up to 15 years to bring a drug to market, from the early signs, the early biology and chemistry, through to actually getting that drug into patients. And often, four, five, six years of research and development has to be done to get that drug to market. Now, as we used to say at Parke-Davis, the pharmaceutical company where I worked, “The patient is waiting.” Right? If it’s a serious disease, the patient might be dying, and they’ve got to wait six years for us to do this thing, so can we accelerate that? Can we speed up some of these processes, so that we can get the drugs into patients and clinical trials and out to market faster? So, that’s the real contribution of what we’ve got. Being able to speed up that process. Getting, you know, the world of information about what might be possible: this chemical compound, this drug, interacts with this target to produce this effect in humans. And that’s really what we’re trying to do.

Patricia [00:12:36] Well, could you share with us some other examples, successful applications, or case studies where Data2Discovery’s technology has significantly improved health care research or patient outcomes? I mean, you said earlier, 5 or 6 years before drugs hit the market, has that time been shortened so far?

David Wild [00:12:53] Yeah. So, in the example where we actually internally discovered this potential drug ourselves, that process that we went through with our AI stack would normally have taken at least a year, possibly a couple of years. We literally went from asking the early questions through to having that clear signal that, yes, these 2 or 3 compounds could be really interesting. It’s embarrassing to say, but we had it in a couple of days. Now, it took a little bit of time to do the real, what we call, wet lab experiments, getting real scientists to do the real experiments with a real drug to confirm this. That took us a couple of months; in a drug company that could have been a little bit quicker, but the whole process was maybe two months instead of a year for that discovery. And so that’s, you know, we still don’t know if that drug will get to market, but that’s the kind of example of the time reduction that we can get.

Patricia [00:13:53] Oh, that’s amazing! And talking about Konnex.AI, what were the market needs or gaps that you identified that led to the development of this automated reporting solution?

David Wild [00:14:04] Yeah, I mean, what actually happened was it was 2016. I saw the problem that people needed just the briefing, report, summary. There were several people I knew in emergency management who expressed this to me, that if I could only just have something which brought it all together in one place. And I tried to do it manually. Once, I had a subscription service where once a week I would do, like, a national product. It says, “Nationally,” you know, “What are the things that might want to be watched in terms of emergency events?” This week, it could be tornadoes, it could be hurricanes, it could be earthquakes, earthquakes tend to be after volcanoes, it could be solar weather, it could be cybersecurity events. And it was really popular. We had a couple of hundred paying subscribers, but it was taking up my weekend to do this, and people really wanted it customized for them, and they wanted it daily. So, it wasn’t – I just couldn’t do that. So, it was kind of on hold until generative AI came along, and then I asked the question, “Wait a minute, we might be able to automate this process.” So, we did some, kind of, testing and kind of validated that we could do this, and then, we had a beta test group of 60 responders and emergency managers, all of whom loved it. And now, we’re just at the point that we kind of, you know, are onboarding customers and actually getting this out to people, so they can use it in their daily jobs.

Patricia [00:15:32] And are there any other specific industries, besides emergency response sectors, where Konnex.AI’s technology has been particularly impactful in streamlining information processing and decision making processes?

David Wild [00:15:45] Yeah, that’s a really good question. So, we’re still in the very early stage. We only formed the company in May 2023. So, we’re still discovering where the real value is for people, but we are getting some interesting signals. One for instance is in competitive intelligence and business intelligence. So, again, it’s the same problem. If you want to know what your competitors are doing, there’s a huge amount of information sources, and if we can aggregate, and summarize, and organize that information, bring it together into briefings, it could be valuable. And really, any place where people have to make, you know, critical decisions based on the large amount of information, which is spread around different sources, we could see some value in this, but we’re just really exploring that right now. So, maybe come back to me in a year and I’ll give you a better answer.

Patricia [00:16:38] Sure, I’ll ask that again in a couple of years maybe. Well, you were talking about, you know, how AI is being applied and leveraged in these companies. Can you provide us some more insight into how exactly AI is used within the healthcare industry, and what are the specific applications or use cases that have shown promise in improving patient care and outcomes?

David Wild [00:16:59] This is a really great question again. And, I think healthcare is one of the industries and areas where, potentially, AI could have the biggest impact. And I think, the first thing I would say is there are hundreds, maybe even thousands of very specific places where AI can help. Now, this is not new. We tend to use AI to mean generative AI, like ChatGPT, but really the full breadth of AI, with predictive modeling, goes back decades. And we’ve seen it at different points in healthcare. You know, in the area I worked in, computer aided drug discovery, back in the 1990s we were doing pretty advanced predictions based on data. But, you know, some examples: the obvious one is analyzing X-rays, that, you know, we have this kind of assistant to a human in analyzing X-rays and looking for, kind of, red flags and signs. Another is in patient care, and essentially having a second opinion. So, having a chatbot or something just kind of listening in to a doctor-patient conversation. If you kind of think about it, you know, there’s nothing like a good doctor in helping you get well when you’re sick, right? If you have a good doctor, that’s what you need, and an AI bot isn’t going to replace the doctor. You know, the AI bot isn’t going to be, you know, able to understand your humanness and relate to you as a human, all of which is massively important in patient recovery. But again, you know, the last time I checked PubMed, which is the kind of repository of medical publications, I think in 2022, 1.7 million medical papers came out. So, let’s say you’re a doctor in a particular field, and let’s just be conservative and say, maybe you need to be on top of the information in 1% of those, right? So, 17,000 papers a year. That means you have to read about 50 papers a day to stay on top of the research that’s going on. That’s not going to happen. No doctor is going to be able to read and digest 50 papers a day, but a computer can read and digest millions of papers a day. So, you know, there’s different strengths and weaknesses. So, you know, the doctor might not be on top of all that research, but the AI is, and the AI might not be able to get this picture of, like, something’s wrong with this human, we don’t know what’s going on, but I can feel it, you know, based on my experience. So, together, they could really provide much better patient care. But really, you know, I think we’re just going to see bubbling up these hundreds, maybe thousands of very specific applications. I think as we get further down the line, there’s some really interesting possibilities. You know, I don’t want to sound like Silicon Valley hype, but I think it is possible that, you know, if a computer can read all the research that’s out there, maybe it can identify some insights into cancer, or diabetes, or other diseases that no human has actually seen, just because they haven’t had access to all this information. So, I think there’s some, you know, I’d like to think five, ten years from now, maybe AI can help us at least advance treatment for some of these really horrible, kind of, you know, difficult diseases.

Patricia [00:20:35] That’s really fascinating! And you were talking earlier about how some people, when they think of AI, they just know generative AI like ChatGPT. And we now know that it’s not always accurate. And, you know, it all depends on the type of data that you feed into these AI models. Like you said earlier, I mean, a doctor is not going to be able to read 50 research papers a day, but AI can. How accurate would you say is the condensed data that AI gives you about the data you feed into it? Because, you know, obviously, you’re using it for health care, and it’s really important information where there can be no gaps, basically, you know, you can’t miss anything, which is why you need AI to help. How accurate is the data? How do the checks and balances take place in that?

David Wild [00:21:22] [00:21:22]Yeah. Well, I think it really is a checks and balances things. So, it’s a trade off, right? This machine has, you know, read 50 million research papers, but there are maybe some basic things it doesn’t understand that we humans understand. So, it could really completely misunderstand. Well, it doesn’t understand. It just completely misrepresented something, and that certainly does happen. And you know, the old adage, “To err is human.” To really screw things up requires a computer. Humans do make mistakes, and humans make bad mistakes, [33.6s] but computers, it takes a whole different state kind of way. [00:22:01]So, that’s certainly a concern. In the way it’s usually presented is the idea of AI hallucinating. Like, at least with generative AI, it just comes up with complete nonsense that sounds plausible. However, I think humans sometimes come out with complete nonsense that sounds plausible. In fact, I might be doing that right now. So, so I think it’s, you know, for me, on balance, there are, you know, definitely ways. Even just with ChatGPT, there are ways of prompt engineering that at least reduce the risk that you’re going to get a completely terrible result out. You can even tell if, you know – I really care that these results are accurate, so if you don’t know what the answer is, just tell me you don’t know what the answer is or something. So, there are ways to kind of reduce the risk of that kind of hallucination. A good example of this is Konnex.AI where the, you know, we’re presenting information to people who make critical decisions, right? We don’t want to get this wrong. So, we put a lot work into making sure we’re not sending them complete nonsense, but we’re sending them something approximating reality. But we do have checks and balances, and one of them is for everything we present to a human, we have a link, and that link goes back to the source documents that we use to create that summarization. 
So we always recommend to people: if you’re going to make a decision on something we present, click on that link and check out the source documents. Because a lot of the value here is that most of the information we consume, it’s not literally 99%, but a lot of it, isn’t relevant to us, and we want to find the one or two bits that are. And when you find them, it’s reasonable to spend some time validating them against the source documents, but we don’t want to have to use the source documents for everything, right? So that’s one example. And I think there are good emerging ways to do this. It’s a research area, a developing area: having those checks and balances and being able to use AI carefully and well.
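The two safeguards David describes, a prompt that explicitly permits “I don’t know” and a summary that always carries links back to its source documents, can be sketched in a few lines. This is a hypothetical illustration, not Konnex.AI’s actual code; the function names, field names, and URL are invented for the example.

```python
# Sketch of two hallucination safeguards: a cautious prompt and
# mandatory source links. All names here are illustrative.

def build_cautious_prompt(question: str) -> str:
    """Wrap a question in instructions that reduce the risk of a confident wrong answer."""
    return (
        "I really care that these results are accurate. "
        "If you don't know the answer, say 'I don't know' instead of guessing.\n\n"
        f"Question: {question}"
    )

def summarize_with_sources(summary: str, source_urls: list[str]) -> dict:
    """Pair every summary shown to a human with links to its source documents."""
    if not source_urls:
        # Refuse to present unverifiable output to a decision-maker.
        raise ValueError("no source documents: refusing to present summary")
    return {"summary": summary, "sources": source_urls}

result = summarize_with_sources(
    "Condensed finding drawn from the literature.",
    ["https://example.org/source-paper"],
)
print(result["sources"][0])
```

The point of the second function is exactly the workflow described above: the reader who is about to act on a summary can click through and validate it against the originals.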

Patricia [00:24:08] Thank you for that! That definitely clears up a little bit more about this research area for me. You were talking earlier about Konnex.AI, and I wanted to ask for more examples of how AI technology is being integrated into emergency response and management systems to enhance preparedness and coordination, especially during a crisis.

David Wild [00:24:28] Yeah, absolutely! So, the term that’s used most widely is situational awareness. Situational awareness means you know what’s going on, and a really important part of it is everybody within a particular scope having access to the same information, so they’re making decisions based on the same understanding of a situation. Emergency management is really just four different jobs, called preparation, mitigation, response, and recovery. Preparation is helping prepare for future incidents; putting aside a stockpile of food would be preparation. Mitigation is taking steps to reduce the impact or probability of a future event; putting a smoke alarm in your house is a mitigation step. Response is when something bad happens anyway and you have to try to sort out the mess: save lives, prevent injuries, save buildings, and get back to something like normality. And then recovery is getting back to normal after an incident. For really big incidents, recovery can take years; technically, we’re still in recovery from Hurricane Katrina in 2005. So each of these stages has different needs, right? For preparation and mitigation, think of us here in Bloomington, Indiana: what do we as a community need to do to prepare for the future? Well, we need data for that. We need to know what the risks are. What’s the risk of a tornado versus an earthquake? I don’t know, but we could probably find an answer to that. And what steps have other people taken that help reduce the impact of those things if they do happen? Now, there are thousands of Bloomingtons, all gathering information and making decisions independently.
So if we can actually share that information and enable people to quickly come up with their own preparation and mitigation plans, without having to put huge amounts of work in themselves, that’s a big win for everybody. We can be more prepared with less boring human work. And the same with response and recovery. When something bad happens, how do we get the right information out so that everybody knows what’s going on? What just happened, what does it mean, and what’s likely to happen next are the key pieces. Again, it’s about having access to all the right information. So this is a broad set of applications. To be honest, when we think of AI, it’s really easy to think of the exciting things, like using Apple Vision Pro goggles to get 3D environments or something, but a lot of the highest-value applications are just really boring stuff: helping people fill out forms, prepare plans, doing the tedious work. Nobody becomes an emergency manager because they want to spend all day writing documents, right? So can we help them with the boring stuff, so the humans can do the exciting stuff?

Patricia [00:27:36] And what would you say are some of the challenges or limitations associated with integrating AI technologies into these industries, and how are researchers and practitioners working to address these obstacles?

David Wild [00:27:47] So, there’s always a barrier to introducing new technologies, particularly into an environment that’s not technology-rich, right? Say I come up with some random new technology that can help with hotel management or something. There’s a barrier of: how is this better than what I’ve got so far? What can it do for me? There’s the cost of actually shifting over to the new technology, and things are going to go wrong with it. So there’s a lot of work involved in introducing new technology. With generative AI, we have a head start, because literally almost everybody in the universe is messing around with ChatGPT. So everybody is starting to ideate about how generative AI might help with their job or their thing. The normal big challenge of identifying that middle space, where the technology can solve a real problem for somebody in their job, has been made much easier, because the people in those jobs already know something about the technology and are ideating about how it could help them. So that’s a huge plus! Then there are challenges specific to AI. Firstly, people know they should be doing AI, because everybody else is doing AI, but there’s this concern that something bad could happen. How can we trust this thing? It’s a machine, right? Maybe we can trust the humans who know how the machine works a bit better. So again, with Konnex.AI, we do employ real experts in AI, trustworthy people, who can help us trust that something bad isn’t going to happen. But it’s a really valid concern. The other is: what happens to my data? Is somebody going to get access to all my data if we’re using OpenAI?
And the answer is, well, kind of, yes. So we’re now starting to look at closed-system large language models, where you’re not sharing your data with somebody else. And probably the big one for everybody is, “Is this AI just going to replace me? Am I out of a job?” It’s a big concern, because there are going to be jobs lost because of AI; there’s absolutely no doubt about that. But if you want job security, a really good way to get it is to understand how to apply AI in your line of work, because then you’re not just having this bot come along and replace you; you’re a high-value person. It’s not obvious: you have to understand the field and understand AI to understand how it can really help, right? So if you have that kind of expertise, and you know how to marshal bots so that one person can be a hundred times more effective, for instance, that’s going to be good for you. That’s a slightly concerning answer, I know, because then there are winners and losers: people who are able to use the AI and people who can’t. But there’s no getting around the fact that there’s going to be a big shift in the workforce, and that’s different from when computing and the internet came along. People are going to have to adapt around that. So, that is a concern.

Patricia [00:31:26] That’s really interesting! And we’ll get into generative AI later on. We were talking about the importance of data and information, especially when it comes to making strides in innovation for health care and emergency response. So, David, I want to ask you: how would you define Informatics, and what distinguishes it from related fields such as Computer Science and Data Science?

David Wild [00:31:49] So, I’m remembering back to when I first started at Indiana University in 2006. I was at a gas station, and somebody came up to me, because I had an IU Informatics sticker on my car, and said, “My daughter just started Informatics. What the hell is Informatics?” And I had no answer. I was like, “I don’t actually know. I’m going to find out.” But here’s the way I characterize it. Data Science is all about data: how we can use data, how we can predict with data, how we can manage data. Computer Science is about these machines we’ve incredibly been able to build over the last few decades that can do this computation: how do we program them? How do we build them? How do we get them to do things that are useful? Informatics is right at the interface with human beings. It’s about how we can make all this technology useful for people. And if you look at what we do in Informatics, that’s really the common thread through everything. We have a really broad set of people. We have anthropologists looking at the societal impacts of computing. We have Animal Informatics: what we can learn from animals about the use of technology, and how we can have technology that helps animals. Complex systems, looking at how, for instance, a flu pandemic spreads. We have Human-Computer Interaction, which is understanding that interface with the human, and Human-Robot Interaction, which is how humans and robots can interact. So really, Informatics is that space between the technology and human beings, and we’re finding increasingly that’s where the action is. There’s obviously amazing stuff happening with what we might call foundational technologies, with LLMs, with AI, with computational power, but what do we do with all that? There’s a lot of work to be done in that middle space.

Patricia [00:33:59] In your opinion, David, why is the study of Informatics significant, especially in today’s digital age, with the increasing prevalence of generative AI technologies?

David Wild [00:34:10] Yeah. Well, humans and computers are now interacting 24/7. We’re on our phones; we’re on ChatGPT asking questions. So it’s not some kind of theoretical question; this is happening now, in real time. There’s a complete fusion between machines doing things and us doing things, and we’re influencing each other. So there’s urgency in understanding these questions. Is social media bad for us? Well, we don’t really know; it’s too soon to tell, but let’s research it. Are certain kinds of people more able to adapt to certain kinds of technologies than others? Well, let’s find out. Historically, universities can be quite siloed: there’s Psychology over here, there’s Engineering or Chemistry over here, and then there’s this other, newer thing called Computer Science over here. To do this stuff in the middle, you’ve got to break down those silos. You need psychologists, social scientists, and computer scientists all working together. So what we’ve built at Indiana University is a space where people of different disciplines can all come together to understand this. There have been some amazing heated arguments between an anthropologist and a computer scientist in our meetings, but they’re great; we’re really working things out. The pace of change of technology can make us easily think every problem is a new problem. So now it’s ethics: what do we do with this AI is somehow this new problem we hadn’t anticipated, and we’ve got to deal with it urgently. Well, it turns out, if you’re a social scientist, or a political scientist, or many other kinds, there’s a whole bunch of work on computing and society that went on in the 1950s, 60s, and 70s.
We’ve been thinking about this problem as human beings for a very long time, and so there’s a lot of work we can draw on. When you get social scientists talking to computer scientists, these things come out.

Patricia [00:36:26] And how do you see the field of Informatics evolving in response to the rapid advancements in generative AI, and what new challenges and opportunities does this present for researchers and practitioners across many industries?

David Wild [00:36:39] Yeah. Speaking personally, the biggest challenge is the speed and cadence of academia relative to the speed and cadence of the technology. The classic way we research something is: I have an idea, I submit a grant application, to the NSF maybe, they take several months up to a year to review it, and then, if I’m lucky, great, I get the grant. That enables me to hire a PhD student, who then needs about a year of training, and then they can do some research. They’ll research for a few years, and we’ll publish it in a journal, which takes another year. That time frame is just not going to work here, because anything we publish on generative AI right now needs to get out there tomorrow. It can’t come out five years from now, because it’d be completely irrelevant. So we as academics have got to find some way to at least condense that cycle from research to impact, or from research to the dissemination of knowledge. And there are plenty of ways to do that. Fortunately, there are new ways to publish nowadays, even if it’s on Substack or something; there are ways to get research out quickly now. It’s a little more challenging to get the actual research itself moving faster, but I think the key there is getting academia, industry, and other entities, foundations, whatever, to work closer together on some of these big problems we need to solve. The other kind of challenge in academia is that all these big, exciting developments have actually come from industry, not directly from academia, maybe only indirectly. OpenAI didn’t come out of a university directly, right? So things are happening that are taking us all by surprise in academia, and we have to be a little more reactive in our research as well.

Patricia [00:38:39] Earlier, you were talking about how AI could potentially impact job security in industries that don’t integrate it. What would you say are some of the key skills or competencies that students today, or those studying Informatics, should develop to effectively engage with generative AI technologies and their applications?

David Wild [00:38:59] Yeah, another great question. My answer is maybe going to be slightly controversial. Usually, when people answer this, they’ll offer a list of skills, something like Java programming, or statistics, or machine learning. All of those are great skills to learn, but they’re not the critical skills for getting a job or bringing value, for two reasons. One, they change too quickly, and two, we never quite know when one of those skills is going to be replaced by machines. Being a Java programmer five years ago sounded like a really safe bet; now it’s, “I think I could do this on ChatGPT. I don’t really need a Java programmer.” So, stepping back: what are the key abilities humans can have that help bring value from technology to people? That value could be in a company, helping build products or solve problems, or doctors being able to cure people, that kind of thing. And the answer, really, is some of the much more human things. The first one is curiosity. I always say to my students, the first skill you should develop, if you don’t already have lots of it, is curiosity. It’s really easy in an academic environment to focus on: how do I get an A in this class? How do I do the things I need to do to get the piece of paper at the end? And that works against curiosity, because curiosity takes you on random walks and journeys. It’s, “I wonder why it’s not possible to do this thing. Why is my professor telling me we can do this thing but not this other thing? Let’s go off, find out, and try some things.” It’s an experimental thing.
But if you have curiosity, I think you’ll be happier, enjoy what you’re doing more, and be able to develop a problem-solving approach and identify problems that need to be solved. The second is agility, which is easier for some people than others: some people find it really hard to change, and for others it’s a lot easier. But wherever you’re at on that spectrum, try to get a couple of steps closer to agile. By agile, I mean: I’m a Java programmer, but it looks like Python is becoming more popular, so let’s learn Python. Making those adaptive shifts in response to how the technology is developing. And the third skill is creativity. What’s interesting is that I’ve talked to several employers who are hiring people with undergraduate degrees in things like music and art, not the kinds of degrees you’d think people would have going into technology. But people with those creative backgrounds can think more creatively about problems, and the actual technical skills can be added on more easily later. Lawyers as well: the training to be a lawyer is great training for problem solving, that critical thinking and creativity. And then, just find the thing you really care about, which gets back to curiosity as well. There’s so much possibility out there, an infinite number of places where value could be created by bringing technology together with some kind of problem. You don’t have to go into accounting, or healthcare, or emergency management. Go to the thing you’re really interested in and care about, deep dive on that, and see where you can get to.
And maybe it doesn’t work out, but the process of deep diving will give you experience you can transfer somewhere else. So that was maybe not a specific kind of response, but it’s really what I feel: building curiosity, creativity, and agility is what’s really important.

Patricia [00:43:21] I think that’s really great insight, definitely something really valuable, and actually a more realistic answer, if I’m being honest. Well, David, given your extensive experience in various domains of AI research and application, how do you approach the ethical considerations inherent in the development and deployment of AI technologies?

David Wild [00:43:40] So, I had a kind of mental light-bulb moment a few years ago when I was co-teaching a course on Data Science Ethics with my colleague at the time, Eden Medina. She’s now at MIT, and she’s a computer historian; her research is all about the history of computing, with a particular interest in ethics. She opened up to me all this really interesting work that’s been done on ethics for decades: what ethical frameworks there are, what different kinds. It helped me understand that there are no easy answers to anything, but there are well-established frameworks that help you understand what the different answers might be, and maybe help you figure out which one applies in a particular situation. An example is the utilitarian framework, where you’re trying to do the most good for the most people, while understanding that in doing so, some people might be harmed somehow, or some people might not get access to that good. Another is a duty-based framework, where you have a duty to act: a doctor has a duty to act in a certain ethical way with a patient, and sometimes that’s written into agreements or law. That’s a different kind of thing, and the two might be in conflict; the utilitarian framework might get you to a different place than the duty-based one. So just being aware of those different frameworks and doing some research on ethics, ethics in general and then ethics in AI and technology, is something I think everybody should do. You could even ask ChatGPT to tell you about some of the history; I don’t know how well that would work. But it’s not like we’re just floating at sea without any kind of direction.
There’s a long body of history to help us look out for the mines in the minefield, and also for the opportunities, to make sure we do the best we can in bringing good with technology and not harm. The other thing I’d say is that a lot of these questions are somewhat out of our control. Should ChatGPT, or any LLMs, exist at all? Should they be regulated? Most of us are not really going to have any input into the answers to those questions. Some of us might, but mostly the question is: given the situation as it stands today, how can I do something good here while being very careful about not doing harm? Almost all of us have these kinds of questions coming up every day. For me, as an academic: how can I teach and evaluate my students in a way that acknowledges the elephant in the room, this machine over here that everybody can access, which gives plausible answers to any question I might set as an assignment? What can I do about that? Well, there are things I can do. I could do a rule-based thing, where I say anybody who uses ChatGPT is going to be so severely punished that you’re not going to want to risk it. Or I could use a detection tool and tell students, “I’m going to use this tool; I just want you all to know.” Or I could redesign my assignments around the fact that ChatGPT exists, and that’s actually the option I use most often. For instance, it’s pushed me more towards in-person group discussions, and then evaluating people’s engagement in those discussions, instead of having a term paper where they write an essay on something. We tend to think of ethics in these grand terms, like whether a technology should exist or not.
But for me, it’s in the micro-decisions we’re making every day: how do I help my students be successful in life, given that a lot of the things I used to ask them to do can be automated now? There are lots of different answers to that question.

Patricia [00:47:50] It’s really great that you talk about the ethical implications of using large language models like ChatGPT in classrooms specifically. I wanted to get your take on AI text detection tools like AI Purity. Is this something that you would be open to using in your classroom? Is it something you feel is safe for your students to use?

David Wild [00:48:10] Yeah, absolutely! I’m really encouraged that AI Purity and companies like it are starting to tackle this problem, right? We’re at an early stage, so we’ve got this weird thing going on where we have a piece of text, a human who is claiming to have written it, and a machine that evaluates it. But it’s a kind of cat-and-mouse game. At some point, I can say, “Write me an essay that has a few errors in it, so it doesn’t sound like it comes from AI,” and then the AI detection has to adapt to that and figure out what it looks like when ChatGPT is trying not to sound like ChatGPT. At some point, the question isn’t so much “Was this written with ChatGPT or wasn’t it?” The question is more “What’s real and what isn’t real?” That’s probably the most important question here. As a Brit, I’ve been seeing all this in the news about the British royal family putting out a picture of Princess Kate, and it turns out it was photoshopped and wasn’t even real anyway. And everyone is asking, “Was that video a deepfake, or was it not?” I’ve seen instances where people have actually written an article themselves, but everybody thinks it sounds like ChatGPT. So there’s all this stuff going on, and it’s about getting the signal out of the noise, knowing, as I’m sure is the case, that the internet is being flooded with automatically generated text content right now. And we’re not going to get to a good place at all if we get into a cycle where AI is learning from content it generated a year ago. So being able to get signal out of noise, reality out of non-reality, and clear thinking out of background, meaningless words is what I care most about.
And then, with the problem of “Was it ChatGPT or not?”, in some ways the question for me is: is there new thinking here, and did this human come up with it? If I have a really good thought, but I’m dyslexic and not really good at writing, and I have ChatGPT help me express that thought in a way other people can understand, is that a bad thing? I think probably not. If a student wants to go drinking and doesn’t want to learn about a really important topic that I think they need to know to be successful in life, and they just fire something off on ChatGPT and hand it in, is that good? No, I don’t think so. They’re going to miss out on what I think is a really important thing they need to know. So again, it’s not just ChatGPT or not ChatGPT; it’s “Are we getting something of value here or not?”

Patricia [00:51:21] That’s really great, David! And I think it’s really relevant and important to have these types of discussions, especially now with generative AI. I don’t think a lot of people are aware of just how much it impinges on their privacy and data security. So, I wanted to talk about your book. What inspired the Personal Digital Resilience Handbook? What are the message and the mission, and how does one become digitally resilient?

David Wild [00:51:48] Wonderful questions again! So, this arose out of a concern I developed, particularly as I was building the Data Science program at Indiana University. I got to meet all these companies that were vacuuming up information about us, integrating it, and making scores, and it opened my eyes. You go into a store and buy something, and they ask for your phone number, and you think, “Oh, they just want the phone number so they can have it locally,” but it’s getting vacuumed up through twenty levels and integrated with other stuff. That may be okay, but so many things can go wrong with it. One of the biggest concerns I see right now: you know Maslow’s hierarchy of needs, from basic needs like eating, sleeping, and breathing air up to higher-level human needs like relationships and meaning in life. It feels like over the last decade or so, we’ve moved Maslow’s hierarchy of needs onto the cloud. Everything we do is in this environment, and that makes us vulnerable in multiple ways. Data sharing is a huge vulnerability; we’ve lost control of all this personal information about us. But we’re also reliant on systems that somebody else controls. The obvious one is a bank. If you stuff your money into a mattress, it’s safe in the sense that you completely control it, but it’s not safe if your house catches fire or somebody breaks in, and you lose it. The bank has all this security, but then they control it, so if the bank doesn’t want to give you your money, you kind of don’t have money. So there are these tensions. What I lay out in the book are privacy, security, and control. Privacy: how can I keep my data private?
It’s not about what I have to hide, but what I have to protect: I want to protect this information. Security: how do I just keep my stuff secure? It’s the equivalent of the lock on the front of the house. So we get into password managers, how to use two-factor authentication, identifying phishing emails, just basic steps to not let something bad break into my information. And then control: how do we actually have control over the stuff we think we control but somebody else actually does? The obvious one there is file sharing. Yes, we can give Dropbox all our data, but there are ways to share your data between machines in environments you completely control, so that if something bad happens and the internet goes down, you can still access your stuff, or when Google decides it doesn’t like you anymore and closes your account, you still have your stuff, because it’s backed up somewhere. So the book was meant as a really practical set of steps the ordinary person can take: secure your phone, secure your computer, use the internet in ways where you’re not just giving out all your information to everybody all the time, and have at least some basic hygiene and security around your use of technology to mitigate some of those basic risks. We do have an advanced section where we go into threat modeling and some other things. But really, I had a lot of people in my life who were struggling with phishing emails, and “Did I get hacked?”, and “I’m getting these phone calls from people saying it’s my bank; is it my bank?” If you’re not completely on top of this, it’s just a horrendously confusing environment. I’ve spent all my career in computing, so I vaguely know what’s going on.
But for people who haven’t done that, what things can they do, very simply, to get some control and a little bit of protection back?

Patricia [00:55:58] That’s really fascinating! I honestly urge everyone listening to this podcast to read your book and learn how they can protect their data, especially in today’s digital age. Just one last question, David, before I let you go today: as AI continues to evolve, what future trends or developments do you anticipate in terms of its applications and impact across health care, drug discovery, emergency response, risk assessment, and cybersecurity?

David Wild [00:56:23] Great question! And I hesitate to give an answer because, you know, generative AI in particular is constantly surprising us with what is possible, but, you know, with some of the constraints as well. Thinking 5 or 10 years into the future, I think it’s clear that, certainly for the Western world, we’re entering a phase of the world which has less stability than we’ve maybe been used to, for numerous reasons: climate change, political instability, but also, you know, economic changes and the impacts of technology on economics. So, I think life is actually getting more complicated, and there’s more uncertainty in all our lives. Like, will I be able to pay my rent a year from now? Will I have a job a year from now? Should I get remote work? There’s this hugely increased number of options, but a hugely increased amount of uncertainty. So, for me, a big value of AI is helping us navigate that new world we’re entering, where there are new hazards, new risks, new opportunities, and huge amounts of information about all those things. The information explosion is kind of unabated. Even for me, five years from now, I anticipate there’s going to be even more information out there that might be relevant to us. We could have more uncertainty happening, and we are going to have to find a way to navigate that in a meaningful way. So, the idea of an AI assistant that can just help us: each of us, personally, can have a thousand assistants to go out, figure all this stuff out, and help us navigate our lives. For me, that’s the primary thing, because that really is at the core of everything else, right? So, you know, healthcare – well, that’s a me question. If I get sick, “How do I get better?” is the question. Healthcare helps prevent you getting sick in the first place.
But if I do get sick, it helps me get better as quickly as possible. That’s really all healthcare is, right? So, how can I have really good healthcare? Well, maybe the disruption isn’t so much within the healthcare system itself, you know, better X-ray reading, all that good stuff. But this involves a more fundamental change in “How do I stay healthy and not get sick, and if I do get sick, how do I get better?” So, I feel like, you know, the optimist in me says that we’ll find a way for this AI and technology to finally serve us as real human people. All of us are equally valuable, right? It’s not serving an institution. We find a way for it to serve each of us as a person in the community and be able to kind of navigate this instability that we’ve maybe got in the world in a much better way. So, you know, all kinds of things could go wrong, but the optimist in me says we’ll maybe find out how we can be really human in the midst of all this technology and enjoy the fact that we get to be human, with, you know, the machines doing all this boring stuff and all this necessary stuff for us, so we can be the best humans we can be. So, yeah. So I am an optimist, even though I write books on bad things happening.

Patricia [00:59:49] I think that’s important, to stay an optimist. David, before you go, is there any message you’d like to share with our audience today?

David Wild [01:00:00] I think I’ve probably said everything that’s on my brain, but it’s been wonderful to have this opportunity to answer these really great questions, and, you know, I’m excited by what AI Purity is doing and the whole kind of explosion of advancements around generative AI. So, thank you for the opportunity!

Patricia [01:00:19] Thank you so much, David, for gracing our podcast with your time and the valuable insights you’ve shared with us! And of course, thank you to everyone who has joined us today for another enlightening episode of The AI Purity Podcast! We hope you’ve enjoyed uncovering the mysteries of AI-generated text and the cutting-edge solutions offered by AI Purity. Stay tuned for more in-depth discussions and exclusive insights into the world of artificial intelligence, text analysis, and beyond. Don’t forget to visit our website and share this podcast to spread the word about the remarkable possibilities that AI Purity offers. Until next time, keep exploring, keep innovating, and keep unmasking the AI! Thank you once again, David! Have a great day ahead.
