Episode Transcript
00:08 Hey everyone, welcome to the Absolutely Critical Podcast.
00:10 I'm your host, Lee Mangold.
00:12 So to no one's surprise, the AI revolution is here.
00:16 Deepfakes, voice cloning, and AI-generated images are making impersonation attacks more
scalable, more believable, and honestly, harder to disprove in the moment.
00:27 The biggest IT risks organizations face today are no longer limited to
technology, but to how attackers are exploiting
00:35 normal human instincts like urgency, authority, and familiarity.
00:39 And that's exactly what we're going to talk about today.
00:42 So my guest today has been a longtime friend and colleague of mine, James McQuiggan.
00:47 James has over 25 years of cybersecurity experience where he specializes in human risk
management, artificial intelligence, and the crossroads of social engineering and AI.
00:56 As a former CISO advisor at KnowBe4, he delivered thought leadership on topics
including
01:02 human risk management, social engineering, ransomware, and more recently, the threat
landscape of artificial intelligence and how that overlaps.
01:11 Through industry conferences, webinars, and media engagements, he translates complex
security concepts into actionable insights for diverse audiences.
01:20 His extensive background includes senior cybersecurity roles at Siemens Energy's wind
division,
01:27 with expertise spanning cybersecurity standards, incident response, and industrial control
system security.
01:32 James also serves as part-time faculty at Full Sail University teaching cyber threat
intelligence.
01:39 He is also, and this is how James and I actually ended up meeting for the first time, a
dedicated community leader volunteering with ISC2 as the co-chair of the North
01:49 American Regional Advisory Council and the chair of the Southeast Regional Management
Committee,
01:57 following eight years as the president and founder of the Central Florida ISC2
Chapter.
02:03 James, happy to have you.
02:05 Welcome to the podcast.
02:07 Wow, that was some bio you put together for me, Lee.
02:11 I am just blown away and impressed by all the research.
02:14 I didn't read a single line of it.
02:16 Yeah, it was.
02:17 Yeah.
02:18 Yeah, no, thanks for having me on.
02:21 I'm always excited to be here.
02:24 And it's always a pleasure to chat with you, whether it's on a podcast, at a conference,
hanging out in the street, wherever it may be.
02:31 Yeah, my only question is why you didn't mention me in your bio, but we'll get to that
later for sure.
02:37 Yeah.
02:38 So James, uh let's get right to it.
02:42 As I mentioned in the bio and as you and I have talked a lot about, uh you've spoken a lot
about AI over the last few years.
02:49 And your area of expertise, your real focus area, really is on that human factor side of
AI.
02:55 When I hear this, I immediately think of phishing-style attacks,
03:00 but also more recently, we've been seeing and hearing a lot about deepfakes and AI image
generation.
03:07 I guess, you know, my big question is, and we know a lot of this, we've seen a lot of this, but
how are we seeing this in a business context?
03:16 Yeah, from the business perspective, when it comes to artificial intelligence, especially
when we look at social engineering, for me, cybercriminals are going to
03:28 target the humans, because it's a lot easier to attack them than it is the technology.
03:33 I mean, yes, there are plenty of zero-days and exploits and vulnerabilities that are out
there.
03:39 You know, you've got to patch it, get the systems patched and all that.
03:42 But.
03:42 A lot of the time, cyber criminals find their way into those organizations through the
human.
03:47 For me, anybody that's got an email address in your organization has a key to the front
door.
03:54 So think about your own homes.
03:56 Think about who's got keys to your house.
03:58 Or maybe you've got one of those thumbprint readers or keypad entries or Bluetooth, you
know, uh door lock.
04:03 But think about who's got a key to your front door.
04:06 You've got to think about that with your organization as, you know, security leaders and
everything else.
04:11 Who's got a key to the front door, who's got access, whether it's users or agentic AI, you
know, all the different API calls that you might be doing to third-party vendors or
04:22 external systems or even inside your own network, looking at all of those identities. And
cybercriminals are looking to the human again because they're, we'll say, a soft target.
04:33 They look at them as a soft target that they can convince.
04:37 They can hit them with an emotional response.
04:39 They can hit them with a hierarchical position, pretending to be a CEO, CFO, manager.
04:48 And get them to take a certain action because they're the CEO and I need you to do this.
04:53 And it's funny because, well, it's not funny as in ha-ha, it's funny as in ironic, because
we were dealing with this six, seven years ago, and it's probably
05:05 still going on, where you would have
05:08 leaders and organizations reaching out to employees going, hi, I need you to go get some
gift cards for a client or whatever.
05:17 The whole gift card scam with Apple.
05:19 I mean, what CEO needs Apple gift cards to give to their clients?
05:22 But, you know, we're past that.
05:24 We got to the point where you've got policies in your organization that say your CEO will
never contact you to go buy gift cards.
05:32 But now we're back around what's old is new again.
05:35 And that's kind of
05:37 the theme with a lot of the different AI attacks that are out there: what's old is new
again.
05:41 We've got cybercriminals leveraging hierarchical scams, essentially, to get employees to
take action because of "I said so, and I'm your boss, and I can get you fired."
05:52 Okay, I've got to go do it.
05:54 I've got to go transfer the money.
05:55 And we've seen a number of instances, some big stories over the last number of years, where
you had an employee get an email
06:06 and a text message that came from the CFO asking about wire transfers and wanting to
transfer money.
06:13 And the employee pushed back and was very skeptical, but they followed it up with a
video call.
06:20 And when you get a video call and you see the person there, and okay, it might be a little
distorted, because, you know, we know not all the video software that's out there is
06:30 going to show you a perfect, you know, high-resolution image.
06:33 If you're trying to say there's ever anything wrong with Teams, I'll hear nothing of it.
06:39 Okay, okay.
06:40 You know, I mean, there's a lot of different platforms out there.
06:42 I've had the pleasure of playing with all of them, so to speak.
06:45 You know, not all of them will give you that full resolution, or you could be
going, oh, I've got a bad connection.
06:50 I'm connecting on my hotspot and your image is distorted and the voice is breaking up or,
you know, the voice doesn't sound right.
06:57 Oh, I broke...
06:58 I hurt my nose the other day, or I've got a cold.
07:01 That's kind of one of the fun ones.
07:02 But they get on the video call.
07:04 And it makes it that much more believable for the end user.
07:08 Like, a video call.
07:09 OK, all right.
07:11 But they're not looking at where the video call is originating from.
07:13 They're connecting into it.
07:14 Maybe a Zoom link or it looks like a Zoom link.
07:17 It's not zoom.us.
07:19 It's zoom.com or whatever.
07:21 And they're, you know, connecting in on that.
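(The link check James is describing here is easy to make concrete. Below is a minimal sketch in Python, assuming a simple allowlist of approved meeting domains; the domain list and function name are illustrative, not something from the conversation.)

```python
from urllib.parse import urlparse

# Domains we consider legitimate meeting hosts (illustrative allowlist).
APPROVED_MEETING_DOMAINS = {"zoom.us", "teams.microsoft.com", "meet.google.com"}

def is_approved_meeting_link(url: str) -> bool:
    """Return True only if the link's host is an approved domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in APPROVED_MEETING_DOMAINS)

# A lookalike that merely contains "zoom" in its hostname fails the check.
print(is_approved_meeting_link("https://us02web.zoom.us/j/123456789"))    # True
print(is_approved_meeting_link("https://zoom.com.example-meet.net/j/1"))  # False
```

The exact-match-or-dot-prefix comparison matters: a plain substring test would wave through lookalikes that embed the real domain inside a longer hostname.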
07:24 That unfortunate incident caused the organization to have twenty-five million dollars
walk out the door, not in one transaction,
07:34 but in nine, nine different transactions totaling $25 million.
07:39 It was a wake up call.
07:41 It was a shock.
07:42 I know the CISO for that particular organization has spoken several times and said it
was an extremely sophisticated attack.
07:50 And you're right.
07:50 It was because they were layering multiple different attack vectors.
07:55 They were doing chat.
07:56 They were doing phishing.
07:57 They were following up.
07:58 They were letting them know, hey, you're going to get a video call.
08:01 Here's the link.
08:02 So it plays out.
08:03 We look at it as cybersecurity professionals going, all right, well, you shouldn't have
believed it from the chat.
08:08 But when you're dealing with those kinds of environments...
08:11 Now, we have had organizations be impacted where you had the CFO get a phone call from the
CEO.
08:19 And this is at a large luxury car manufacturer.
08:23 And essentially, the CEO is asking him, I've got to transfer this money.
08:28 You know, same stories.
08:30 What's old is new again.
08:31 I'm getting on a plane.
08:32 I'm not going to be reachable.
08:33 I need you to do this transfer.
08:34 This has got to get done.
08:36 And the CFO took a beat and said, OK, I can get it taken care of really quick.
08:44 You recommended a book to me the other day.
08:46 What was that?
08:49 And that just threw the scammers, and essentially they ended up
hanging up. He felt there was something just slightly off, and that essentially allowed
09:03 him to see through it, and the cybercriminals, you know, didn't get away with anything.
But we're seeing more and more of these attacks happening where cybercriminals, scammers,
09:13 are trying to come in through the employees and leverage the
09:18 hierarchy positions: your CFO, CEO, manager.
09:22 For years, we had business email compromise, and you would have scammers, cybercriminals,
get into the email systems of a vendor of an organization that had customers and clients, and
09:35 then send off those updated account changes.
09:41 I think that's the next level.
09:43 I think that's where we're going next.
09:45 I think we're going to start,
09:46 I hate to say it, seeing
09:48 suppliers and vendors coming in on deepfakes to get them to change information rather than
an email.
09:55 I was going to say there, it just sounds to me like that same attack vector, right?
10:02 If you can compromise somebody's Microsoft account, you get on their Teams, right?
10:07 It's one of the things that we know obviously very deeply in IR: when
you think somebody's account is compromised, don't try to contact them through the account
10:17 that you think is compromised, right?
10:20 You have to have that other channel.
10:24 Yeah, so, we use several different ones at Fortress, but I think it's an interesting
idea, having sort of that code word.
10:31 I don't know that that scales very well, but I do like that.
10:34 I do like that.
10:36 Code words are good, but it depends.
10:38 And you're right.
10:39 It all depends on how it scales.
10:42 That's too many code words for people to try to remember.
10:45 Code words are good for family, for small circles.
10:48 You got a small business?
10:49 Sure, go with the code word.
10:51 You got a family?
10:52 Go with the code word.
10:53 I know for certain.
10:55 I'll put money on the table.
10:57 My family and, I'll say, my cousins here in the U.S.
11:02 And if they listen to this, they know exactly what I'm going to talk about.
11:05 But there is one, and we've done this before, because a lot of us are involved in security or
fraud or investigations.
11:11 And so we're very aware.
11:15 And in the event that one of us reaches out and go, hey, look, I've been in a car
accident.
11:20 Can you send me some money?
11:22 We know that we have a code word and all I got to do is say two words.
11:26 And if I don't get the right two words back, I know it's not them.
11:30 And I've actually done it, where one of them reached out to me.
11:33 And it just seemed a little weird.
11:34 And they go, James, and give me the two words.
11:37 It's really me.
11:37 And I'm like, hey, I'm just checking.
11:39 It just seemed kind of weird.
11:41 So yeah, it worked.
11:44 Oh, what are the other two words?
11:45 Get lost.
11:46 Yeah, there you go.
11:47 Right.
11:48 You know what?
11:48 I would actually believe that.
11:50 Yeah.
11:52 No, but usually asking questions is what's going to trip up the scammers. Whether it's
family members or coworkers or your boss or the CEO, asking that question has been what
12:06 seems to be working time and time again, because they're not going to know the answers,
especially if it's something unique that only that person would know, or maybe they don't
12:15 know and you're just looking to trip them up.
12:19 The questioning, being curious, being skeptical,
12:23 that's going to go a long way in helping trip them up.
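(On Lee's earlier point that memorized code words don't scale: the same shared-secret idea can be done with a time-based one-time code instead of a phrase. A minimal sketch, standard library only and all names illustrative, following the TOTP construction from RFC 6238; this is an editor's illustration, not something either speaker prescribes.)

```python
import hmac
import hashlib
import struct
import time

def one_time_code(shared_secret: bytes, interval: int = 30) -> str:
    """Derive a 6-digit code from a shared secret and the current 30-second
    window (the same construction as TOTP, RFC 6238)."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(shared_secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# Both parties provision the secret out of band, once. On a suspicious call,
# each side computes the current code and compares; nothing secret is spoken.
secret = b"provisioned-out-of-band"  # illustrative; use a long random secret
print(one_time_code(secret))
```

Unlike a code word, the spoken value changes every 30 seconds, so overhearing one exchange doesn't let a scammer replay it later.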
12:26 Yeah, and you know, I think that's one of the pieces of advice that I would always
give.
12:29 And it was so funny, because I was given this advice a number of years ago, when
you're talking about, like, what is suspicious activity?
12:38 And I remember when I was working for the Army, they were saying, you know, we always had to
do these suspicious activity reports and things like that.
12:45 And I remember asking that question.
12:47 And I remember the security manager at the time telling me the obvious: well, if you
think it's suspicious, it's suspicious.
12:54 Well, that's...
12:56 Okay, that's an obvious answer, but okay.
12:58 And actually I understand that, right?
13:00 It was kind of funny, but yeah, I think that having that healthy suspicion is good and it
is going to be part of society going forward, right?
13:10 You know, I mean, yeah.
13:13 Your intuition is based on your cognitive biases and everything else.
13:16 And that's all based on your experience, your history, your awareness, what you've seen,
what you've heard, what you've read.
13:21 And that goes into it.
13:23 I mean, nowadays, if you and I were hanging out in New York City and a guy walks up to us
and opens up his jacket and goes, hey, I got a Rolex.
13:30 You want to buy it?
13:31 You and I are both going to know.
13:33 And probably most of society are going to go, yeah, no, we're good.
13:36 We know those are fake because we've heard those stories.
13:39 But when it comes to a lot of the deepfakes, a lot of folks have heard about them, but
haven't experienced it or known anybody that has experienced it quite yet.
13:48 So it's still got a ways to go.
13:50 And you know, I...
13:54 I want to talk about sort of how to prevent some of this, but the first time I really saw
this kind of thing in action was a presentation that Rachel Tobac had given, just showing
14:07 just how insanely easy all of this actually is.
14:11 And the more video, the more pictures of somebody you have, the more audio of somebody you
have, the more data you have to work with, the higher fidelity that model gets and that
14:21 image gets, right?
14:24 And you know, I think that's a scary thing because what are we doing right now?
14:29 Right, we're sitting here on camera, we're talking, you didn't need this much audio of
either of us to do anything, but here we are.
14:37 So I mean, what are we left to do in those kinds of situations other than just being
skeptical?
14:46 Yeah, I mean, you really only need about 10 seconds.
14:50 They're saying three to get 85% accuracy.
14:55 Being in IT, being in cybersecurity, being the people that we are in this industry, we
know the more data you have, the better model, the better decision, the better whatever
15:07 you can make.
15:08 So for me, when it comes to doing deepfakes, I've done dozens of them.
15:14 I've made them from still images.
15:15 I've made them from video that's already out there on YouTube, where I'm able to sync
their lips.
15:22 The software basically just changes the whole mouth area and matches it to the audio that I
have generated, and for the audio,
15:31 I only give it about 30 seconds.
15:33 It doesn't take a lot.
15:35 So, you know, what we've talked about already is plenty of data now to do real voice
content. Real-time voice content,
15:43 you need a lot more, because it has to be able to build it in real time versus process it,
put it in a video, and then you get it out.
15:49 But when it comes to doing deepfake face swaps, which is kind of what they use
in a lot of the attacks, or they will just use pre-generated videos.
16:02 But to do face swaps, you do need some higher technology.
16:05 You need systems that have a higher-end graphics card, more RAM, and processing capabilities,
kind of like the laptop that I'm using here today.
16:12 because it allows me to do this.
16:15 And for those who are, it's awful.
16:18 So for those who are listening and not watching, James has now changed his face to look
like mine, and you put it up, okay, that is terrifying.
16:27 So for those who are watching that, James, you could stop that at any time right
now.
16:34 That'd be great.
16:37 I can also be Robert Downey Jr., too.
16:40 Yeah, and you know, kind of what I'm noticing there is, you know, I would not fall for that
trickery.
16:46 And I think that's that healthy, number one, right now, that's the healthy skepticism, but
that's just a matter of time.
16:54 But the other one I love to throw out, this program that I have is great, because the
other person I love to throw up here is this guy.
17:03 Yeah, so I think it's really interesting, right?
17:08 Like, and that was live.
17:09 That's off your laptop.
17:11 That is sufficient to fool quite a lot of people, and there wasn't a
whole lot to it.
17:19 You clicked a couple buttons on that one, right?
17:21 Like, we can, and...
17:22 I could click another one and I could be sounding just like Ira or just like you.
17:25 But I have to build that model and that does take a bit of time.
17:28 But once I have that model and I'm set, then I could jump into a Teams call over at
Fortress, and whoever, your CTO or your CEO, just tell them that, you know, I've had
17:41 enough fun.
17:42 I think I'm going to go on my own now.
17:43 I'll see you all later.
17:44 You can keep my stock options.
17:45 I don't want them.
17:47 Yeah, let's maybe not do that.
17:49 Let's maybe not do that.
17:51 No, I do like my career where I am and I do like not being in jail.
17:55 So.
17:57 But you talk about
17:58 the information that's out there, how do we protect ourselves?
18:02 We are sitting ducks, so yeah, quack quack, in all the biggest ways possible.
18:07 And we kind of touched upon it already, in the fact that code words can work, but
you basically
18:16 are going to have to rely on questions.
18:17 You're going to have to have some skepticism, because they're not only generating videos of
real people, but they're leveraging things like Gemini or Kling AI and creating realistic,
18:32 ultra-realistic videos of events that are going on.
18:36 Now, I wouldn't be surprised.
18:38 I haven't looked yet, but I would have no doubt that there are probably
18:43 several videos circulating around on social media of the events that are
happening in Minnesota.
18:50 Sadly, the events that are going on.
18:53 We've already seen incidents relating to the Iran-Israel war.
18:57 We saw a lot of deepfake videos come out and then get debunked by several forensic
companies that do analysis of deepfakes and put that information out there.
19:09 Unfortunately, a lot of those deepfake
19:11 detection tools aren't readily available to consumers, to the public.
19:16 They are readily available for journalists, media, police, law enforcement, and so forth.
19:22 There are some technologies and software that's available for organizations, enterprise
organizations, because that's who they're catering to.
19:29 What I'm looking forward to, and what we need as a society, is that capability in social
media, because a
majority of our society is now getting their news from social media.
19:41 And when you've got things like Gemini creating essentially deepfake videos, synthetic
videos of, you know, when the L.A.
19:49 riots were going on, we had the National Guard guy, Bob was his name, shoot, you know,
doing 10-second videos from behind the scenes, you know, things going on, which were all
20:00 completely fabricated and all completely inaccurate.
20:02 But those disinformation campaigns are also being targeted towards industries.
20:08 And you could also leverage it against
20:10 businesses as well, for corporate espionage.
20:13 You could have it where, and it doesn't have to be like an Apple, a Facebook, or a
big company.
20:18 It could be a small business where you put out a deepfake video. You're bidding for a
contract, let's say, and you make a deepfake video of the CEO, you know, ranting off on
20:29 racial slurs, or, you know, pretending, putting
them at the scene in Minnesota, going undercover as
20:40 ICE or whatever; your imagination can run with it.
20:45 But if they wanted to put something out there and make that deepfake video and put that
out there to damage the reputation, that's going to be huge.
20:53 And without a real way on social media for us to come back and go, we recognize this is a
deepfake,
20:58 we're going to be running into some problems soon.
21:02 Yeah, it's a whole new level of reputational harm risk.
21:08 And it's not to be lost on me or anyone else that even when we're talking about
disinformation campaigns at that national level, and even when politics gets involved and
21:19 all of that, it shouldn't be lost on anyone that a big percentage of businesses in this
country are part of the defense industrial base.
21:27 And seeing a campaign come out where
21:31 something's fabricated, you may be taking direction from it, you might be taking the wrong
direction, right?
21:38 Yeah, so it's very, very interesting and it is an interesting road ahead.
21:44 I absolutely agree with that.
21:46 So I kind of want to pivot a little bit here.
21:50 You and I talked a little bit in the past about how AI is getting involved in the GRC space.
21:58 Talk a little bit about that.
22:01 Yeah, when you think back three years, 2023, we've got ChatGPT launched
on the scene, and, you know, 100 million users or however many it was, you know, using it
22:13 quicker than any other tool.
22:15 And people start looking for ways that they can incorporate that into their organization.
22:20 You know, early on it was like, hey, I got it to write a phishing email or hey, I got it
to write malware.
22:24 Now we're looking at it.
22:25 It's like, how can it help me be more efficient in my role?
22:28 You've got,
22:29 you had programmers uploading source code into it.
22:33 Big mistake.
22:34 So nowadays it's like, OK, how can we use generative AI, its capabilities, machine
learning, agentic AI within the GRC space?
22:44 And so I know that we're starting to see organizations slowly work into it, between 40 and
50% of organizations trying to use it, where it can ingest all of your,
22:57 you know, all of your data, all of your evidence and information, and then it can go
through and do that, you know, comparison, do that audit for you, and come back with
23:04 information, help you write policies.
23:08 I was sitting on stage at a conference a number of years ago.
23:10 I think it was in '23.
23:12 And somebody said, we need to do something with AI, generative AI needs to be in a data policy, and we
need to get a data policy written.
23:18 And I asked it to write me a data policy and then showed it off.
23:21 And I said, all right, I got the policy.
23:23 Now what?
23:24 He wasn't very happy with me.
23:25 I don't think he took it as a joke.
23:26 Yeah.
23:27 But, you know, for me, and this goes back to what I talk about with my students at Full
Sail with Cyber Threat Intelligence.
23:36 ChatGPT is a tool.
23:38 It's not there to do your homework for you.
23:40 It's not there to write your policies for you.
23:42 It's there as a tool.
23:43 It's there to get you started, get you baselined.
23:46 You know, you can have it ingest all your other policies.
23:50 And depending on, you know, how public that information is, you might do it
23:54 where you do an on-prem large language model.
23:57 If you've got a relationship with Microsoft where your information is contained, or with
Google, same thing, have it ingest it in there and go, OK, based on all these policies, we
24:06 need one for AI, and here are all the requirements, everything else.
24:08 So you can start looking at that.
24:12 Start being able to leverage it to find the areas where you're missing evidence
or you need more information.
24:22 Because it's allowing you to,
24:24 you know, hopefully be more efficient in that risk monitoring, risk reporting, you know,
maybe even trying to automate a lot of that compliance workflow, evidence
24:33 gathering, you know, the reporting, the audit prep, trying to get everything ready.
24:36 It could be something where you could literally have it all inside of something like a
NotebookLM, and your auditor shows up and goes, all right, ask it any question.
24:44 It'll give you what you need.
24:45 I mean...
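(A rough sketch of the kind of gap-analysis call James describes, assuming an on-prem model served behind an OpenAI-compatible endpoint, which tools like Ollama expose; the endpoint URL, model name, and prompt are placeholders, and as discussed later in the episode, the output still needs human review.)

```python
import json
import urllib.request

# Placeholder endpoint and model for a locally hosted, OpenAI-compatible server.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODEL = "llama3"  # whatever model has been pulled locally

def find_policy_gaps(existing_policies: str, framework_requirements: str) -> str:
    """Ask the local model to compare current policies against a framework and
    list requirements with no covering policy. Output requires human review."""
    prompt = (
        "Here are our current policies:\n" + existing_policies +
        "\n\nHere are the framework requirements:\n" + framework_requirements +
        "\n\nList each requirement that no policy covers, citing the requirement ID."
    )
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping the model on-prem, as James suggests, means the policy and evidence text never leaves the organization's own network.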
24:47 Yeah, yeah, exactly, you know.
24:49 I can think of probably a dozen different ways in which it would be interesting.
24:54 One of the things that you see a lot of and you mentioned it is using generative AI to
develop a policy.
25:02 For those of us who spend way too much of our time and careers writing policies and
procedures and standards and guides and things like that, on one side, feel like
25:14 I've done my time and I know how to do this and I don't mind some help.
25:18 But on the other side, it's really easy to fall into that exact same trap that we always
see where somebody wants to get insert name of certification here, SOC 2, right?
25:30 You can go online, do a Google search for SOC 2 policies.
25:33 You can download them right now and they're technically all SOC 2.
25:36 They'll get you through.
25:38 Doesn't mean you do any of it, but the policies will get you through.
25:41 And I do have a little bit of a worry about that in particular because, yeah, ChatGPT,
Claude, Gemini, all of them, they all have the ability to pull all that information and
25:54 just write you a really great policy.
25:56 But yeah, ultimately, it is back on you to figure it out.
25:58 Yeah, right.
26:02 There are a lot of different frameworks out there already; NIST has got one, there's an ISO cert you can
get.
26:07 For any of your listeners in the EU, you know you've got the EU AI Act that's over there
as well.
26:13 And we're seeing more and more.
26:14 I mean, states are trying to come up with their own AI policies, but governance is a key
aspect when it comes to it, and being able to leverage AI for your governance as well is
26:27 going to go a long way.
26:28 I am actually excited about a lot of the AI implementations inside of a GRC platform.
26:35 Things like, are there any risks that I'm missing?
26:38 Or here's my environment, what are the risks?
26:42 And it's not going to give you the 100% answer, but that's why I have a job still, right?
26:48 Because I don't need the 100%.
26:50 Show me those gaps where I might have missed something.
26:54 I've done that a little bit in the recent past, and it has been very, very, we'll say,
instructional, for sure. So yeah, I'm really, really interested in the future of
27:05 that.
27:05 I think there's a lot of power there.
27:07 Of course, the one thing to remember with all of this is that human in the loop is the fun
buzzword now, that human oversight.
27:14 But for me, and I've been saying this for three-plus years when it comes to AI:
when I fly and I get on the airplane, I look to the left and I see two pilots, even
27:24 though I know that plane can taxi, take off, fly, and land all by itself on autopilot.
27:30 But you need the pilots there.
27:32 Same thing with our AI programs.
27:34 We still need the human. We may not need as many, which is a whole other topic, but you
still need that verification, that human oversight, especially on large
27:43 decisions, because we've seen where some people, some organizations, IT folks, put in AI and
said, hey, optimize our database.
27:51 And it said, OK, and it wiped it out.
27:55 That's, that's optimized.
27:57 Yeah.
27:58 Yeah.
27:58 The queries are really fast then.
28:00 Yeah.
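(James's pilot analogy reduces to a simple pattern in code: a human-approval gate in front of any destructive action an agent proposes. A minimal sketch, with the action names and approval flow purely illustrative.)

```python
# Actions an agent may run freely versus ones that must stop for a human.
DESTRUCTIVE_ACTIONS = {"drop_table", "truncate_table", "delete_records"}

def execute_with_oversight(action: str, target: str, run_action) -> str:
    """Run safe actions directly; destructive ones require explicit sign-off."""
    if action in DESTRUCTIVE_ACTIONS:
        answer = input(f"Agent wants to run {action} on {target}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "rejected by human reviewer"
    return run_action(action, target)

# An "optimize our database" plan that includes truncating a table would pause
# here for a human, instead of silently wiping the data out.
```

The point is the same as the two pilots in the cockpit: the automation can do the work, but a person still has to sign off before anything irreversible happens.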
28:01 Well, James, I really appreciate you being here, buddy.
28:04 I want to give you the opportunity to wrap up here.
28:09 If there's that one piece of advice, and I think we could all use just one piece of
advice, right?
28:14 That one piece of advice to send us home with, what do you got?
28:19 You know, we kind of need our own zero trust with everything.
28:22 And again, I know buzzword, but it's OK to be politely paranoid.
28:27 It's OK to be skeptical.
28:28 We are going to see,
28:30 sadly, it's going to get a lot worse before it gets better on the deepfakes and
detections.
28:34 If it's too good to be true, you know, if you're reacting to it emotionally and you're
wanting to share it out because, my God, this horrible thing or whatever else, take a
28:44 moment and kind of look at it and go, OK.
28:48 Why am I freaking out over this?
28:50 Is this something I should share?
28:51 Can I verify this?
28:52 We've seen it with images.
28:55 We've seen it with videos already happening.
28:57 But society has a tendency to believe immediately what they see and what they hear.
29:02 And we have to at least now kind of have that trust-and-verify, or verify-then-trust,
essentially, overall.
29:12 Get those verification systems in now where, you know, with your family, maybe it's a code
word, maybe it's a question, because it's not hard for cybercriminals or scammers out
29:21 there to go find a target and then go look at their kids and get their voice from social
media because it's readily out there.
29:28 Again, they only need 10 seconds really to be able to generate it.
29:33 Scary stuff.
29:34 Well, James, thank you so much for being here.
29:36 Thanks for being on the podcast. Where can people find you online?
29:42 Certainly on LinkedIn, I'm posting on a regular basis.
29:44 I do have my website, jamesmcquiggan.com.
29:48 My email, my contact info is out there, but LinkedIn is kind of the best place where
you'll find me on a regular basis.
29:54 Posting and sharing stories and information, thought leadership.
29:59 Awesome.
29:59 Thanks again, James.
30:01 Everyone, that is it for this episode of the Absolutely Critical Podcast.
30:04 If you enjoyed the conversation today, subscribe, share it with others, and we'll see you
next time.
30:09 Take care.
30:10 Bye-bye.