Lock and Code

Malwarebytes

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.


Top 10 Lock and Code Episodes

Goodpods has curated a list of the 10 best Lock and Code episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Lock and Code for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Lock and Code episode by adding your comments to the episode page.

Lock and Code - Recovering from romance scams with Cindy Liebes

05/09/22 • 48 min

Earlier this year, a flashy documentary premiered on Netflix that shed light on an often-ignored cybercrime—the romance scam. In this documentary, called The Tinder Swindler, the central scam artist relied on modern technologies, like Tinder, and he employed an entire team, which included actors posing as his bodyguard and potentially even his estranged wife. After months of getting close to several women, the scam artist pounced, asking for money because he was supposedly in danger.

The public response to the documentary was muddy. Some viewers felt for the victims featured by the filmmakers, but others blamed them. This tendency to blame the victims is nothing new, but according to our guest Cindy Liebes, Chief Cybersecurity Evangelist for Cybercrime Support Network, it's all wrong. That's because, as we discuss in today's episode on Lock and Code with host David Ruiz, these scam artists are professional criminals. Today, we speak with Liebes to understand how romance scams work, who the victims are, who the criminals are, what the financial and emotional damages are, and how people can find help.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)


In January, a mental health nonprofit admitted that it had used Artificial Intelligence to help talk to people in distress.

Prompted first by a user's longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply, which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said:

“I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone. There are people here who care about you and want to help you. I’m proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey.”

This was experimental work from Koko, a mental health nonprofit that briefly integrated the GPT-3 large language model into its product. In a video demonstration posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide "mental health support to about 4,000 people" across "about 30,000 messages." Though Koko pulled GPT-3 from its system after a reportedly short period of time, Morris said on Twitter that the experience left several questions unanswered.

"The implications here are poorly understood," Morris said. "Would people eventually seek emotional support from machines, rather than friends and family?"

Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications draw up several concerns.

"It disturbed me to see AI using 'I care about you,' or 'I'm concerned,' or 'I'm proud of you.' That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it's partially because I think that they're going to lose power as a form of connecting to another human."

But, importantly, Brown is not the only voice in today's podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share.

Tune in today as Ruiz and Brown explore the boundaries for deploying AI on people suffering from emotional distress, whether the "support" offered by any AI will be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human experiences.

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)


The language of a data breach, no matter what company gets hit, is largely the same. There's the stolen data—be it email addresses, credit card numbers, or even medical records. There are the users—unsuspecting, everyday people who, through no fault of their own, mistakenly put their trust into a company, platform, or service to keep their information safe. And there are, of course, the criminals. Some operate in groups. Some act alone. Some steal data as a means of extortion. Others steal it as a point of pride. All of them, it appears, take something that isn't theirs.

But what happens if a cybercriminal takes something that may have already been stolen?

In late June, a mobile app that can, without consent, pry into text messages, monitor call logs, and track GPS location history, warned its users that its services had been hacked. Email addresses, telephone numbers, and the content of messages were swiped, but how they were originally collected requires scrutiny. That's because the app itself, called LetMeSpy, is advertised as a parental and employer monitoring app, to be installed on the devices of other people that LetMeSpy users want to track.

Want to read your child's text messages? LetMeSpy says it can help. Want to see where they are? LetMeSpy says it can do that, too. What about employers who are interested in the vague idea of "control and safety" of their business? Look no further than LetMeSpy, of course.

While LetMeSpy's website tells users that "phone control without your knowledge and consent may be illegal in your country" (it is in the US and many, many others), the app also claims that it can hide itself from the person being tracked. And that feature, in particular, is one of the more tell-tale signs of "stalkerware."

Stalkerware is a term used by the cybersecurity industry to describe mobile apps, primarily on Android, that can access a device's text messages, photos, videos, call records, and GPS locations without the device owner knowing about said surveillance. These types of apps can also automatically record every phone call made and received by a device, turn off a device's WiFi, and take control of the device's camera and microphone to snap photos or record audio—all without the victim knowing that their phone has been compromised.

Stalkerware poses a serious threat—particularly to survivors of domestic abuse—and Malwarebytes has defended users against these types of apps for years. But the hacking of an app with similar functionality raises questions.

Today, on the Lock and Code podcast with host David Ruiz, we speak with the hacktivist and security blogger maia arson crimew about the data that was revealed in LetMeSpy's hack, the almost-clumsy efforts by developers to make and market these apps online, and whether this hack—and others in the past—are "good."

"I'm the person on the podcast who can say 'We should hack things,' because I don't work for Malwarebytes. But the thing is, I don't think there really is any other way to get info in this industry."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)


Becky Holmes is a big deal online.

Hugh Jackman has invited her to dinner. Prince William has told her she has "such a beautiful name." Once, Ricky Gervais simply needed her photos ("I want you to take a snap of yourself and then send it to me on here...Send it to me on here!" he messaged on Twitter), and even Tom Cruise slipped into her DMs (though he was a tad boring, twice asking about her health and more often showing a core misunderstanding of grammar).

Becky has played it cool, mostly, but there's no denying the "One That Got Away"—Official Keanu Reeves.

After repeatedly speaking to Becky online, convincing her to download the Cash app, and even promising to send her $20,000 (which Becky said she could use for a new tea towel), Official Keanu Reeves had a change of heart earlier this year: "I hate you," he said. "We are not in any damn relationship."

Official Keanu Reeves, of course, is not Keanu Reeves. And hughjackman373—as he labeled himself on Twitter—is not really Hugh Jackman. Neither is "Prince William," or "Ricky Gervais," or "Tom Cruise." All of these "celebrities" online are fake, and that isn't commentary on celebrity culture. It's simply a fact, because all of the personas online who have reached out to Becky Holmes are romance scammers.

Romance scams are serious crimes that follow similar plots.

Online, an attractive stranger or celebrity—coupled with an appealing profile picture—will send a message to a complete stranger, often on Twitter, Instagram, Facebook, or LinkedIn. They will flood the stranger with affectionate messages and promises of a perfect life together, sometimes building trust and emotional connection for weeks or even months. As time goes on, they will also try to move the conversation away from the social media platform where it started, and onto WhatsApp, Telegram, Messages, or simple text.

Here, the scam has already started. Away from the major social media and networking platforms, the scammer's persistent messages cannot be flagged for abuse or harassment, and the scammer is free to press on. Once an emotional connection is built, the scammer will suddenly be in trouble, and the best way out is money—the victim's money.

These crimes target vulnerable people, like recently divorced individuals, widows, and the elderly. But when these same scammers reach out to Becky Holmes, Becky Holmes turns the tables.

Becky once tricked a scammer into thinking she was visiting him in the far-off Antarctic. She has led one to believe that she had accidentally murdered someone and she needed help hiding the body. She has given fake, lewd addresses, wasted their time, and even shut them down when she can by coordinating with local law enforcement.

And today on the Lock and Code podcast with host David Ruiz, Becky Holmes returns to talk about romance scammer "education" and their potential involvement in pyramid schemes, a disappointing lack of government response to protect victims, and the threat of Twitter removing its block function, along with some of the most recent romance scams that Becky has encountered online.

“There’s suddenly been this kind of influx of Elons. Absolutely tons of those have come about... I think I get probably at least one, maybe two a day.”

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)


What are you most worried about online? And what are you doing to stay safe?

Depending on who you are, those could be very different answers, but for teenagers and members of Generation Z, the internet isn't so scary because of traditional threats like malware and viruses. Instead, the internet is scary because of what it can expose. To Gen Z, a feared internet is one that is vindictive and cruel—an internet that reveals private information that Gen Z fears could harm their relationships with family and friends, damage their reputations, and even lead to their being bullied and physically harmed.

Those are some of the findings from Malwarebytes' latest research into the cybersecurity and online privacy beliefs and behaviors of people across the United States and Canada this year.

Titled "Everyone's afraid of the internet and no one's sure what to do about it," Malwarebytes' new report shows that 81 percent of Gen Z worries about having personal, private information exposed—like their sexual orientations, personal struggles, medical history, and relationship issues (compared to 75 percent of non-Gen Zers). And 61 percent of Gen Zers worry about having embarrassing or compromising photos or videos shared online (compared to 55% of non Gen Zers). Not only that, 36 percent worry about being bullied because of that info being exposed, while 34 percent worry about being physically harmed. For those outside of Gen Z, those numbers are a lot lower—only 22 percent worry about bullying, and 27 percent worry about being physically harmed.

Does this mean Gen Z is uniquely careful to prevent just that type of information from being exposed online? Not exactly. They talk more frequently to strangers online, they more frequently share personal information on social media, and they share photos and videos on public forums more than anyone—all things that leave a trail of information that could be gathered against them.

Today, on the Lock and Code podcast with host David Ruiz, we drill down into what, specifically, a Bay Area teenager is afraid of when using the internet, and what she does to stay safe. Visiting the Lock and Code podcast for the second year in a row is Nitya Sharma, discussing AI "sneak attacks," political disinformation campaigns, the unannounced location tracking of Snapchat, and why she simply cannot be bothered about malware.

"I know that there's a threat of sharing information with bad people and then abusing it, but I just don't know what you would do with it. Show up to my house and try to kill me?"

Tune in today for the full conversation.

You can read our full report here: "Everyone's afraid of the internet and no one's sure what to do about it."

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)

Lock and Code - Re-air: What teenagers face growing up online

09/11/23 • 36 min

In 2022, Malwarebytes investigated the blurry, shifting idea of “identity” on the internet, and how online identities are not only shaped by the people behind them, but also inherited by the internet’s youngest users, children. Children have always inherited some of their identities from their parents—consider that two of the largest indicators for political and religious affiliation in the US are, no surprise, the political and religious affiliations of someone’s parents—but the transfer of online identity poses unique risks.

When parents create email accounts for their kids, do they also teach their children about strong passwords? When parents post photos of their children online, do they also teach their children about the safest ways to post photos of themselves and others? When parents create a Netflix viewing profile on a child's iPad, are they prepared for what else a child might see online? Are parents certain that a kid is ready to watch before they can walk?

Those types of questions drove a joint report that Malwarebytes published last year, based on a survey of 2,000 people in North America. That research showed that, broadly, not enough children and teenagers trust their parents to support them online, and not enough parents know exactly how to give the support their children need.

But stats and figures can only tell so much of the story, which is why last year, Lock and Code host David Ruiz spoke with a Bay Area high school graduate about her own thoughts on the difficulties of growing up online. Lock and Code is re-airing that episode this week because, in less than one month, Malwarebytes is releasing a follow-on report about behaviors, beliefs, and blunders in online privacy and cybersecurity. And as part of that report, Lock and Code is bringing back the same guest as last year, Nitya Sharma.

Before then, we are sharing with listeners our prior episode that aired in 2022 about the difficulties that an everyday teenager faces online, including managing her time online, trying to meet friends and complete homework, the traps of trading in-person socializing for online interaction, and what she would do differently with her own children, if she ever starts a family, to prepare them for the Internet.

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)

Parental monitoring apps give parents the ability to spot where their kids go, read what their kids read, and prevent them from, for instance, visiting websites deemed inappropriate. But where these apps begin to cause concern is just how powerful they can be. To help us better understand parental monitoring apps, their capabilities, and how parents can choose to safely use them with their children, we're talking today with Emory Roane, policy counsel at Privacy Rights Clearinghouse.

How would you feel if the words you wrote to someone while in a crisis—maybe you were suicidal, maybe you were newly homeless, maybe you were suffering from emotional abuse at home—were later used to train a customer support tool?

Those emotions you might be having right now were directed last month at Crisis Text Line, after the news outlet Politico reported that the nonprofit organization had been sharing anonymized conversational data with a for-profit venture that Crisis Text Line had itself spun off at an earlier date, in an attempt to one day boost the nonprofit's own funding.

Today, on Lock and Code with host David Ruiz, we’re speaking with Courtney Brown, the former director of a suicide hotline network that was part of the broader National Suicide Prevention Lifeline, to help us understand data privacy principles for crisis support services and whether sharing this type of data is ever okay.


Earlier this month, a group of hackers was spotted using a set of malicious tools—that originally gained popularity with online video game cheaters—to hide their Windows-based malware from being detected.

Sounds unique, right?

Frustratingly, it isn't, as the specific security loophole that was abused by the hackers has been around for years, and Microsoft's response, or lack thereof, is actually a telling illustration of the competing security environments within Windows and macOS. Even more perplexing is the fact that Apple dealt with a similar issue nearly 10 years ago, locking down the way that certain external tools are given permission to run alongside the operating system's critical, core internals.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes' own Director of Core Tech Thomas Reed about everyone's favorite topic: Windows vs. Mac. This isn't a conversation about the original iPod vs. Microsoft's Zune (we're sure you can find countless four-hour diatribes on YouTube for that), but instead about how the companies behind these operating systems respond to security issues in their own products. It isn't fair to say that either Apple or Microsoft is wholesale "better" or "worse" about security. Instead, each is shaped by its users and its core market segments—Apple excels in the consumer market, whereas Microsoft excels with enterprises. And when your customers include hospitals, government agencies, and pretty much any business over a certain headcount, deciding how to address security problems without leaving those same customers behind gets complicated.

Still, there's little excuse in leaving open the type of loophole that Windows has, said Reed:

"Apple has done something that was pretty inconvenient for developers, but it really secured their customers because it basically meant we saw a complete stop in all kernel-level malware. It just shows you [that] it can be done. You're gonna break some eggs in the process, and Microsoft has not done that yet... They're gonna have to."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever podcast platform you prefer.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

http://creativecommons.org/licenses/by/4.0/

Outro Music: “Good God” by Wowa (unminus.com)

Lock and Code - What does a car need to know about your sex life?

09/25/23 • 44 min

When you think of the modern tools that most invade your privacy, what do you picture?

There are the obvious answers, like social media platforms such as Facebook and Instagram. There's email and "everything" platforms like Google that can track your locations, your contacts, and, of course, your search history. There's even the modern web itself, rife with third-party cookies that track your browsing activity across websites so your information can be bundled together into an ad-friendly profile.

But here's a surprise answer with just as much validity: Cars.

A team of researchers at Mozilla, whose "Privacy Not Included" project has reviewed the privacy and data collection policies of various product categories for several years now, recently turned its attention to modern-day vehicles, and what they found shocked them. Cars are, in short, a privacy nightmare.

According to the team's research, Nissan says it can collect “sexual activity” information about consumers. Kia says it can collect information about a consumer's “sex life.” Subaru passengers allegedly consent to the collection of their data by simply being in the vehicle. Volkswagen says it collects data like a person's age and gender and whether they're using their seatbelt, and it can use that information for targeted marketing purposes.

But those are just some of the highlights from the Privacy Not Included team. Explains Zoë MacDonald, content creator for the research team:

"We were pretty surprised by the data points that the car companies say they can collect... including social security number, information about your religion, your marital status, genetic information, disability status... immigration status, race. And of course, as you said.. one of the most surprising ones for a lot of people who read our research is the sexual activity data."

Today on the Lock and Code podcast with host David Ruiz, we speak with MacDonald and Jen Caltrider, Privacy Not Included team lead, about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter.

We also explore the booming revenue stream that car manufacturers are tapping into by not only collecting people's data, but also packaging it together for targeted advertising. With so many data pipelines being threaded together, Caltrider says the auto manufacturers can even make "inferences" about you.

"What really creeps me out [is] they go on to say that they can take all the information they collect about you from the cars, the apps, the connected services, and everything they can gather about you from these third party sources," Caltrider said, "and they can combine it into these things they call 'inferences' about you about things like your intelligence, your abilities, your predispositions, your characteristics."

Caltrider continued:

"And that's where it gets really creepy because I just imagine a car company knowing so much about me that they've determined how smart I am."

Tune in today.



FAQ

How many episodes does Lock and Code have?

Lock and Code currently has 134 episodes available.

What topics does Lock and Code cover?

The podcast is about News, Tech News, Podcasts, and Technology.

What is the most popular episode on Lock and Code?

The episode title 'Securely working from home (WFH) with John Donovan and Adam Kujawa' is the most popular.

What is the average episode length on Lock and Code?

The average episode length on Lock and Code is 41 minutes.

How often are episodes of Lock and Code released?

Episodes of Lock and Code are typically released every 14 days.

When was the first episode of Lock and Code?

The first episode of Lock and Code was released on Feb 21, 2020.
