Curiosity Crisis Podcast

Ep 4 | AI & What You Need to Know


AI is everywhere. It seems like the biggest buzzword of 2023. Most of the world's working professionals have probably been exposed to AI in one form or another, and many may be unsure where to start.

We get Luke to leverage his computer science degree to explain artificial intelligence in layman's terms, covering the basics, AI history, use cases and a how-to-start guide.



Khush:

Welcome to the Curiosity Crisis, where we challenge ourselves to explore the world of business, tech, investing and science. Get curious and be part of the journey as we discuss, challenge and learn. So today we're talking about AI, and I'm going to pick Luke's brain about it. Luke studied computer science and software engineering, so he has a bit of background in the topic and absolutely loves all things tech, so I'm very keen to get into it. Lukey, how are you, mate?

Luke:

I'm very well, thanks. It should be a very fun episode. It's obviously a hot topic, and maybe even a bit of a buzzword. But that should be a good episode, I think.

Khush:

Well, let's kick into it, right? What's keeping you curious?

Luke:

Yeah, I've been learning about convolutions and how a convolution works slightly differently in different fields, like mathematics versus computer science. In computer science it's mostly used in computer vision, but in mathematical terms it's essentially a fancy way of multiplying. Anyway, that's been keeping me curious. What about you? What's been keeping you curious?
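For anyone curious what that "fancy way of multiplying" looks like, here is a minimal sketch of a 1-D discrete convolution, assuming Python with NumPy installed. The same multiply-and-add, slid over a 2-D grid of pixels instead of a 1-D list, is what computer vision kernels do.

```python
# A 1-D discrete convolution: each output element is a sum of element-wise
# products between the signal and a flipped, shifted kernel.
import numpy as np

signal = np.array([1, 2, 3, 4, 5], dtype=float)
kernel = np.array([0.25, 0.5, 0.25])   # a simple smoothing kernel

# NumPy's built-in convolution...
print(np.convolve(signal, kernel, mode="valid"))    # [2. 3. 4.]

# ...and the same thing written out by hand, to show it really is just
# multiply-and-add at every position.
flipped = kernel[::-1]
manual = [np.dot(signal[i:i + len(kernel)], flipped)
          for i in range(len(signal) - len(kernel) + 1)]
print(manual)                                        # [2.0, 3.0, 4.0]
```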

Khush:

Bit different, but Nepal has been keeping me curious, actually. I've got a trip booked there. I'm heading off pretty soon, going trekking above, you know, 4,500, 5,000 meters, and existing up there for hopefully a few weeks. So just getting ready for that, sourcing the gear and making sure we're ready to go. Very excited.

Luke:

That's good, yeah. Should be a good trip.

Khush:

Well, let's get into it. First things first, in layman's terms, can you please tell me: what is artificial intelligence?

Luke:

Yeah, I mean, look, I think it's always a good thing to try and define something, and there are various definitions. But a very simple way, and I think a good way, of thinking about it is that it's the attempt to replicate thinking, like a human brain, the ability to analyze things. Essentially it's replicating thinking in a computer. So in the human brain we have synapses, and you could look at those as something like the nodes in a neural network. That's just an example, but essentially you're trying to replicate that. It's artificial thinking. That's how I like to think of it.
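To make the "nodes in a neural network" analogy a bit more concrete, here is a minimal sketch in Python (NumPy assumed available) of a single artificial neuron. The numbers are made up purely for illustration; real networks learn their weights from data.

```python
# A single artificial "neuron": it weighs its inputs, sums them,
# and squashes the result into a range, a bit like a firing strength.
import numpy as np

def neuron(inputs, weights, bias):
    # weighted sum of the inputs, plus a bias term
    z = np.dot(inputs, weights) + bias
    # a sigmoid "activation" squashes the sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.1, 0.9])     # three input signals
w = np.array([0.8, -0.4, 0.3])    # connection strengths ("synapses")
print(neuron(x, w, bias=0.1))     # roughly 0.67
```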

Khush:

Okay, very interesting. I'll pick your brain about whether AI has a brain a little bit later. But in the first instance, AI right now is obviously a big, big word. Very topical, you hear it in just about every company presentation, pretty much everyone's talking about it. But it's been around for a long time, I understand. Can you give us a brief history of it?

Luke:

Yeah, I can. So AI has been around for a long, long time; it's actually been around almost as long as the computer has. Alan Turing, who some people call the father of computing, had already theorized about AI. As early as 1945, Turing predicted that one day a computer would be able to beat a human at chess, and about 50 years later they were able to create an AI that could beat Garry Kasparov. If you don't know who that is, he was a Russian chess player and was very, very good at it, well, he still is, actually. So basically, AI has been around forever. There also used to be a lot of competitions for essentially making a binary decision: you give an AI a picture and it tells you what it is, is it an apple, is it an orange. And then it wasn't really until 2017 that Google published a paper describing a new type of neural network called the transformer. As we know, a few years later ChatGPT came out, which uses that transformer model. So the big breakthrough came in 2017, and it's taken a while for that to mature and for products to come out. That's why it's so relevant at the moment.

Khush:

Yeah, okay. So now more common real-world uses are sort of coming through in AI, and I'll ask you in a bit about how complex and how simple it can be. But one thing that's really on my mind, and I'd say I'm an AI newbie, a bit of a punter in the space, is this: does AI think? Is it conscious? Does it have a mind? And if it does, do we really have control of it? How does it operate?

Luke:

Yeah, I think it's an interesting question, and I like to turn it back and ask, okay, well, what is thinking? Thinking stems from the notion of intelligence itself, and if I asked you to define intelligence, there'd be various definitions you'd probably be happy with. Einstein defined intelligence as not knowledge but imagination, or creativity, which I think is an interesting definition. And one thing that's been discussed a lot in AI is the amazing creativity it's been able to produce in terms of artworks and things. So "can it think" is a very broad question, because there are so many different artificial intelligences, right? There are little nuggets of examples where you could say it might be able to think, but in general, especially for the artificial intelligence most of us have the chance to interact with, I'd say no is the answer. It's more of a filtering system over an enormous amount of data; it feels like it can think because of how much information it's been able to essentially look through.

Khush:

Fair enough. Okay, so if it can't think, then is it conscious? Where I'm taking this is, you know, in some of these movies AI is seen as cyborgs or whatever. That sounds kind of stupid, right, but that's the perception in some places out there. So is it conscious? And do we have control over it? If it's just based on a data set we're feeding it and training it on, do we have control over it?

Luke:

Yes, I think we definitely do. It's not something that's going to break free and do its own thing. The reason we can reach that conclusion is that the human brain is a very complex thing, and that's essentially what we're trying to replicate with AI. One big thing we're trying to achieve is AGI, which is artificial general intelligence, and we can get into that later. But the main point is that a generalized intelligence means it can do many things, and we're not there. I don't want to say we're not close, there are a lot of theories on whether we're close or not. The main thing is that the AIs we interact with daily are not that; they're very good at one specific thing. Whether that's finding patterns in text, or converting text to speech, or correcting your messages, or whatever it is, they're very good at one thing. So it's not really something that can break out and suddenly take over the internet. That's not the kind of AI we even have the capability of making right now.

Khush:

Right, okay. So even though you talk to and work with ChatGPT or Google Bard or whatever, and you're getting some pretty sensational stuff back when you're just feeding it things like a normal conversation, maybe that gives the impression that it can think and it's super intelligent. But again, it's a language model, it's trained on one thing, and just like other AIs out there are trained on one or two things. It's not the human brain that can do so much.

Luke:

Yeah, they are really, really good at giving convincing responses. That doesn't mean they can think. Having said that, just because that's where we are currently doesn't mean it's not something we should think about or try to improve on. AI safeguards are a real thing, there's a lot of work put into them, and I think that's really important. But that doesn't mean we're in danger right now, I don't think.

Khush:

Okay, so a quick one then: to build AI, engineers write code? Yes? Okay, cool, thank you. Even that helps, because I sometimes don't know what goes into AI. So then, who builds and owns AI? Firstly, on the creation side, what type of professionals build AI? Is it all software, so computer scientists and software engineers, or are there other professionals involved?

Luke:

Yeah, it's a good question. Yes is the short answer: it's computer scientists, software engineers, computer engineers, mathematicians, data engineers. But the field has opened up more broadly now as well. It's a very complicated field, and like I was saying, essentially we're trying to replicate the brain, so there are a lot more influences. We're seeing biologists, neuroscientists, physicists; there are a lot more fields contributing to AI now, it's not just computer scientists. If you look at the teams working on amazing AI, a lot of the people have very diverse backgrounds.

Khush:

Yeah, right, okay, cool. It was really interesting when you mentioned neuroscientists, and sorry to link them together, but you're saying AI is trying to replicate how we think, and obviously to replicate how we think, we have to understand how the brain works. Really interesting. So then, do companies own AI? For example, does OpenAI own AI because it owns ChatGPT? Does Google own AI because of Bard? All that sort of stuff.

Luke:

So what they own, essentially, is code that's running on servers. That's their ownership: they own that code, they own that hardware. That doesn't mean you can't have AI that isn't owned by anyone; for example, you can open-source things. Meta, which used to be Facebook, has open-sourced one of their large language models, an LLM called Llama. It's something similar to ChatGPT. And yes, they own the initial code, but it's open source, which means anyone's free to use it and it's publicly available, so you can look at what it is they actually have.
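As a rough illustration of what "open source" means in practice, here is a hedged sketch using the Hugging Face transformers library in Python: anyone can download the weights and run them locally. The model ID below is illustrative only; Llama weights typically require accepting Meta's license on Hugging Face, and running a 7B model needs a machine with plenty of memory.

```python
# Download an open model's weights and generate text locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # illustrative, license-gated model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain artificial intelligence in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```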

Khush:

Yeah, okay. Then, whether it's open source or fully owned by a company like Microsoft or Google or whatever, do you think we'll ever have to turn off an AI? Like, will we have to stop one of them?

Luke:

Yeah, I think the answer is likely yes. If you have an AI one day with an enormous amount of capability, there's probably going to be some unintended use. That doesn't necessarily mean we're looking at a dystopian world or a post-apocalyptic world or anything like that. In general terms, AI is at the forefront of technology, and there's probably going to be some form of misuse or unintended use of it, where it's able to do something you weren't planning on. And to answer the other question that probably stems from that, can we turn it off? Also yes. It's code running on hardware; you can turn it off.

Khush:

Mate, that's great. You're understanding the level I'm coming from, and I'm hoping a lot of listeners appreciate building up from that ground level. While we're on this topic, I'm just going to quickly ask, because it begs the question about unintended consequences: will, or can, or is ChatGPT going to take my job, as a general statement? And I know this is a big topic, but yeah.

Luke:

Yeah, I could give a very long-winded answer to this, and there are definitely different opinions on it, but mine is: the answer is no. I don't think it's taking your job. Having said that, that doesn't mean there aren't areas it could seriously threaten. The fact is we have a new way of leveraging efficiency, so if you're someone who's writing content, whether that's marketing or journalism, or even, to some extent, creating content like images, there's a good chance that might be threatened by it. But I definitely think that, at the moment, artificial intelligence isn't somewhere where it can critically think, and it can't take in all of the factors. So I think in most cases, no, it's probably not going to take your job. But that doesn't mean it won't be a very useful tool, or that parts of the industry won't move towards using AI for efficiency reasons.

Khush:

Yeah, it's so crazy. I couldn't believe some of the creative stuff that's been coming out of AI. Some of the images and paintings and illustrations and stuff like that are so wild, and I've even heard some AI music. It's definitely not what I expected when I was first hearing about AI.

Luke:

It's funny how that works, actually, because we initially thought that with artificial intelligence the last things to go would be creative jobs, because we thought, well, AI cannot replicate human creativity, that's impossible, right? But it depends on your definition of creativity. If you give it enough data to be trained on, it can find little patterns, little glimpses of creativity, in trying to create something you've asked for. So what seems to be creative is really leveraging off artists' creativity, and obviously there are moral debates around that, the ethics side of it, but I think it's really interesting that it can produce amazing content. Some things it can't do, though: it can't critically think about a lot of things at once. For example, how would you replace theoretical physics? That's going to be a pretty difficult task, but it doesn't mean you can't give it parameters and it could help in that area. So it's interesting.

Khush:

Yeah, I just find it so wild. I mean, yes, it can produce some pretty awesome stuff, but is it creative? Well, not necessarily, because it's based on all the data and all the inputs it's been fed. Whereas, for example, Van Gogh or any of these famous artists, and I'm not massive on art, were crazy creative because what they did was absolutely revolutionary and new for its time. So it's based on the inputs. Anyway, let's get into a bit about AI and whether it's going to be effective or not. How do you know whether an AI application is going to be effective? For example, in all sorts of company reports nowadays they're saying, oh, we're an AI company, or an AI-enabled company, and in investor presentations they're spruiking it. But how much of that is nonsense? How do you see the wood from the trees when you're looking into the AI world and what's actually useful?

Luke:

Yeah, I think it's actually a bit of a hard question. Look, AI is currently a hype train, it's currently a buzzword, and especially if you can tack it onto a pre-existing company or product, you can market it that way and it can help generate excitement. It might even help boost your valuation as a business, so from a marketing perspective it makes a lot of sense, and you have to try and weigh it up in a lot of circumstances. I think one good thing is to ask: has this been added to an existing product or service, or has the product been built around AI from the core? For example, ChatGPT, no, that's not hype, it's a very useful tool, because all it is is a large language model. But if you look at another company saying we have AI speech detection, or maybe AI autocorrect, is that just something they've tacked on, or is it something they've really built the company around? That, I think, gives an indication of how much value it has, and whether it's just part of the hype train or actually something very useful.

Khush:

Yeah, okay, very interesting. I could draw a lot of parallels from that to other industries, for example battery storage, electric vehicles, all that sort of stuff, but I'll hold off for now because I definitely want to get deeper into it. So in that case, how simple can AI be? Obviously, like you mentioned, it can be purpose-built from the ground up or it can be added to existing services, and it seems like there are use cases for both, as long as it's making things more efficient and effective. So how simple can it be?

Luke:

Very, very simple. Artificial intelligence encompasses so much, so if you add some form of artificial intelligence to an existing service, that counts: you have AI. It doesn't mean it's adding a lot of value, but it's there, you've done it. So the answer is: really simple. A classic example is some form of binary decision, like, is this an apple or is this an orange? Yes, it's computer vision, but it's quite a simple scenario: you give it an image and it tells you if it's one or the other, or whether it's an apple or not an apple. So it can actually be very simple, but I think most very useful implementations are a bit more complex.
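As a toy illustration of how simple that binary decision can be, here is a minimal sketch assuming scikit-learn is installed. Real computer vision works on pixels; here two made-up features (weight and a surface-texture score) stand in for an image pipeline, just to show how small a trained "apple or orange" decision can be.

```python
# A tiny binary classifier: apple vs orange from two toy features.
from sklearn.tree import DecisionTreeClassifier

# toy training data: [weight_g, texture_score]; texture 0 = smooth, 1 = bumpy
X = [[150, 0.10], [170, 0.20], [140, 0.10],   # apples
     [160, 0.90], [180, 0.80], [155, 0.95]]   # oranges
y = ["apple", "apple", "apple", "orange", "orange", "orange"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[165, 0.15]]))   # -> ['apple'] (a smooth-skinned fruit)
```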

Khush:

Right, okay. So building from that, can you give me some examples of your favorite AIs that are making the world a better place, more productive, more efficient? Where do you see, and where have you seen, AI really adding value?

Luke:

Yeah, that's a good question. I mentioned before how Meta open-sourced Llama 2, and I think that's a really interesting space to watch, because the main thing about it is that they gave the open-source release a commercial license, which means me or you or anyone else can create a product and make money from it using that language model. I think that's huge, because, for example, OpenAI have their secret sauce, that's theirs, but this is giving it to anyone. There could be a lot of really interesting industries positively affected by it, so in general, look for things built on Llama 2. Another interesting one is Pi, which is essentially a personal AI, something relatable. They've taken a chatbot and text-to-speech and combined them to make a talking AI you can interact with, and they're trying to do some really interesting stuff around mental health, which I think is awesome and a space that could definitely grow. And then a big one I haven't talked about, which isn't OpenAI, funnily enough, is actually DeepMind, the team behind AlphaGo; there's a fantastic documentary on YouTube about it. Essentially, they were the first team to build an AI that could beat a human at Go. If you don't know what Go is, it's an extremely complicated board game with an exorbitant number of possible moves, far more than chess, so it's a really hard thing to compute, and they were able to beat a human with an AI. The bigger thing, though, was that they released something called AlphaFold, which is the most accurate predictive mapping of folded proteins on Earth. If you don't know what that is or why it's useful, it'll massively affect chemistry and biology. That came out, I think, in 2020, maybe 2021, and we're starting to reap some of the rewards of them just opening up this data set. So I think that's awesome. Keep your eye on DeepMind as well.

Khush:

Yeah, that's so crazy. I love a lot of those examples, especially the last one, using it in healthcare. Sure, if an AI is beating humans at chess, that's cool, but if you use it to get better healthcare outcomes, that's pretty awesome. Have you got an example off the top of your head of how AI has changed the old way of doing things? Maybe it's even just ChatGPT. Like how we've now just assumed AI into our day-to-day lives and it's made an old process better.

Luke:

Yeah, okay. In my opinion, we're still in that phase of moving away from the old ways, so it's still a case of AI assisting in a lot of ways. I don't know whether there's that much where I'd say the old way is now gone; rather, it's assisting so many of the old ways, and eventually that might change. But I think a big one is content creation, whether that's images or text, even if it's just coming up with the idea or something to build off. That's come a long way, and in a lot of circumstances the first thing a lot of people would do now is ask an AI, and then try and build off it or make it better or something like that.

Khush:

Mate, one of the best bits of advice I ever got given, I think I was writing some papers back in the day and getting grilled on what I was writing, and you sort of get disheartened a little bit, is that it's much, much easier to review and critique something than it is to create it from scratch, right? But then, just like you said before with content creation, before you get something critiqued, you can test it with an AI, even potentially give it your ideas and get something made that you can critique, then also do it in your own words and compare the pair before you send something off. So maybe that initial level of quality is higher because of it.

Luke:

I think the hardest place to start is always with an empty page. So if you can fill it with something, it definitely helps; that can be your starting point. Or even, as you were saying, for grammatical correctness or improving a written piece, it's very useful.

Khush:

Awesome. And I'm going to use your wise words there about starting from an empty page. A lot of our listeners might be thinking, you know, AI could be really good, but I have no idea about it, how to use it, how to be effective with it, or where to start. So what suggestions would you give a typical non-tech guy or gal who wants to explore AI and try to use it effectively to help their work or projects or whatever?

Luke:

Yeah, look, you don't have to understand it, you don't have to understand how it works, to leverage it and get a competitive advantage. We're talking about an extremely complicated field, and you don't have to master that; all you have to do is use it. I think that's the biggest thing: using it. The worst thing you could do is be afraid of it, feel like it's going to take your job, and then not touch it, because then you haven't picked up any of the tools or any of the value it can actually provide. In computer science there's something that's a bit of a joke, called prompt engineering. We joke about it, but it is actually legitimate: there are some actual techniques for getting more value out of an AI. Something to keep in mind is that it's a really young field, there's a lot that hasn't been uncovered yet, so by using these tools you might find some really interesting things that work really well and that no one else knows. So be curious. Try and solve genuine problems that you have with AI, and I think you'll learn a lot. Have a learning mindset, ask for forgiveness, not permission, if you have to, and just go for it. I think you'll get a lot out of it; it'll be very beneficial.
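As a small, hedged illustration of what "prompt engineering" can look like in practice, here is the same request written vaguely and then with a role, constraints and output format spelled out. The prompts are just example strings you could paste into whichever AI tool you use; the structured version usually gets a far more usable answer from a chat model.

```python
# Two versions of the same request: vague vs. structured.
vague_prompt = "Write something about our new hiking boots."

structured_prompt = """You are a copywriter for an outdoor gear brand.
Write a product description for our new hiking boots.
Constraints:
- 3 short paragraphs, under 120 words total
- mention waterproofing and ankle support
- end with a one-line call to action
Audience: first-time trekkers preparing for a trip to Nepal."""

for name, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
```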

Khush:

Yeah, mate, that's so awesome. It's funny, sometimes you can feel like you're just playing with it and wasting time or whatever. I know me and some of the boys have just made silly poems or sent each other photos we've generated on some of those visual AIs, and they're pretty funny. But in a sense you're sort of training yourself, right? If you want to create a funny monkey photo or something, you have a set of prompts, and that trains you to get something out of these AIs.

Luke:

I think it's important to be conscious of it, though. At the very least, if you're just making funny pictures of your mates or funny poems, note down for yourself what worked the best, what gave you the best result, because that might actually carry over to other things. So yeah, definitely be conscious about what is working and what isn't.

Khush:

Yeah, okay, love that. That's really handy. Now, to carry on that theme, I think we're just about out of time. This has been awesome, and we might have to do another episode and go a bit deeper on it. But if a listener can get only one thing out of this discussion, what's the key takeaway you want them to take away from this episode?

Luke:

Yeah, I'll try and keep this one simple, and that is: artificial intelligence is here to stay, it's here to grow, so learn to live with it and learn to use it. I think that's the best thing you could do. What about you, do you have a key takeaway from this discussion?

Khush:

Yeah, I think mine is definitely not to be intimidated by it. I was actually reflecting as you were describing some of those things: at the end of the day, I'm not a software engineer, and I don't understand how a lot of really popular software, a lot of the Google suite, a lot of the Microsoft suite that we use every day for our jobs, was coded and built. But I still use it, get lots out of it, and do some pretty simple things with it. So maybe AI is no different. Maybe I've just got to get into it, use the technology that's there, and get good at it.

Luke:

I love that. Yep.

Khush:

Thank you so much for sharing your thoughts, mate. That was awesome. Alrighty, well, thanks for listening. Find us at curiositycrisis.com and @curiositycrisis on Instagram. We're on all major streaming services, so you can listen however you like. Before I say goodbye, I've got one little favor to ask: if you got anything out of this episode, do us a favor and just send it to one friend you think will get something out of it. It'll make the biggest difference to us; your recommendation definitely carries a lot of weight. So thank you, and catch you on the next episode. Cheers.

Luke:

Thanks for listening.