Wireless threats are easy to overlook, and attackers count on that.

In this episode of Mitiga Mic, Field CISO Brian Contos talks with Brett Walkenhorst, CTO of Bastille, about how wireless attack techniques like Evil Twin and Nearest Neighbor are used to gain access to protected environments. They discuss how these threats show up inside data halls, executive spaces, and high-security facilities, often bypassing traditional network defenses.

Walkenhorst shares real examples of hotspots hidden in server racks, unmonitored Bluetooth connections, and USB-based implants that quietly send data out over cellular networks. He explains how attackers use patience, proximity, and overlooked gaps in wireless monitoring to get in and stay in.

Every other week, Mitiga Mic delves into the human side of cybersecurity in today’s cloud-centric landscape. For example, last episode, Idan Cohen shared insights on the security challenges of CI/CD pipelines. Join our host Brian Contos as he engages in candid conversations with CISOs, researchers, engineers, and other security leaders, uncovering their stories and strategies for staying ahead of modern threats.

Want more expert insights?

Subscribe to Mitiga on YouTube to explore past episodes and stay ahead of what’s next in cloud and SaaS security with Mitiga Mic.

Transcript

Brian Contos

Brett, welcome to Mitiga Mic.

Brett Walkenhorst

Hey Brian, thanks for having me.

Brian Contos

So Brett, we’ve done some podcasts in the past together, but for those of our listeners who aren’t familiar with you, if you could quickly introduce yourself—a little bit about your background and what it is that you do.

Brett Walkenhorst

Sure. I’m the CTO at Bastille. Bastille is a wireless security company. My background is electrical engineering. I got a PhD in wireless communications from Georgia Tech, where I did a lot of work running the GTRI software-defined radio lab. I worked at a number of different companies, including Raytheon Technologies. Now I’m at Bastille as the CTO, running R&D for the company—mostly threat research, helping improve our ability to automate detection of threats and identify malicious behaviors as well as suboptimal configurations, that kind of thing.

Brian Contos

Great. Well, I wanted to talk about a couple things today. First, I wanted to talk about some very popular attacks. They’ve been in the media quite a bit. You’ve spoken on them. I’ve spoken on them a little bit. And then I want to dive into data centers. But let’s kick things off.

A lot of our viewers have probably heard of the evil twin attack, but go ahead and take us through exactly what that is—because I think the way you explain it, and given your background and experience through Bastille as well, kind of puts you in a very authoritative position to talk through why this thing is so bad and why we should be concerned.

Brett Walkenhorst

Okay. Sure. There are a number of different ways that an evil twin attack can be instantiated, but the one I talk about probably the most is where I take some COTS hardware—it could be purpose-built for this function, or it might be something flexible, just a Wi-Fi interface—and I can program it to do any number of things. There are open-source code repositories I can use to instantiate an evil twin attack.

The way I like to think about it is: I’m going to emulate, with my rogue device, an access point on a network that is protected. I’m going to target that network to try to extract credentials. One of the simple ways is to just set up an AP, send out beacons, and get some client to try to connect to my access point.

At that point, I can do any number of things. If they connect—especially if it’s not a protected network—it’s easy. I can just grab them and start interacting with them, maybe compromise the client device. But a more dangerous approach long term would be to trick a user into divulging their credentials to me.

A simple thing would be: once they join my network, I offer them a splash page that says something like “we’re undergoing maintenance.” I put a logo on it, make it look convincing, and ask them to reestablish their credentials just to make sure everything’s on the up and up. If I’ve convinced them, I get the credentials right away.

Maybe I don’t, and then I have to check against whatever they send me. I can do this either actively or passively—having acquired a four-way handshake, I can check the credentials to see if they pass muster. If not, I refresh the splash page and say, “Thanks for trying. That was the wrong password. Try again.”

Eventually I get them to give up their credentials. Now I’ve got access to the protected network, but I can also maintain my position as a man-in-the-middle to compromise that device, provide network services, modify the data that’s going back and forth—whatever I want to do. But the most important thing is I now have access to that network.
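To make the defensive side concrete, here is a minimal sketch of one common heuristic for spotting a possible evil twin: watching for the same SSID being advertised by more than one BSSID. It assumes Python with scapy and a Wi-Fi adapter in monitor mode (the interface name wlan0mon is a placeholder), and it is only an illustration—not how Bastille detects the attack. Legitimate enterprise networks routinely broadcast one SSID from many access points, so a real detector has to correlate far more context than this.

```python
# Minimal evil-twin heuristic: flag an SSID advertised by more than one BSSID.
# Assumes scapy is installed and "wlan0mon" is a Wi-Fi adapter in monitor mode.
from collections import defaultdict
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

ssid_to_bssids = defaultdict(set)

def inspect_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="ignore")  # first tagged element is the SSID
    bssid = pkt[Dot11].addr3                           # BSSID of the advertising AP
    if ssid and bssid not in ssid_to_bssids[ssid]:
        ssid_to_bssids[ssid].add(bssid)
        if len(ssid_to_bssids[ssid]) > 1:
            print(f"[!] SSID {ssid!r} seen from multiple BSSIDs: {sorted(ssid_to_bssids[ssid])}")

sniff(iface="wlan0mon", prn=inspect_beacon, store=0)
```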

Brian Contos

I like how you said at the very beginning—you can buy a purpose-built tool, but you don’t need one for this. You can use a regular wireless access point. A lot of the attacks I see use the Pineapple, for example, which is one of those purpose-built solutions. A lot of that is because you can overpower the legitimate access point, right? So you show up higher in the list of wireless networks, people click on yours, and that’s by design. There might be some deauthentication packets being sent so people have to reconnect, and they reconnect to the one with the stronger signal.

But that’s a nice little package. You plug it in and hit go. You don’t need to do that, though, because like you said, there are so many repositories.

If somebody has the technical aptitude—not everyone’s you—but if someone says, “I’m going to try to put this together,” what’s the out-of-pocket cost to build their own version of this thing?

Brett Walkenhorst

I haven’t looked at these devices recently, but definitely less than 100 bucks. A Pineapple you can have for maybe two or three hundred. But a standard Wi-Fi interface—I want to say 40 to 60 bucks, somewhere in there. It’s very inexpensive. So, sub-100 to build it yourself, and a couple hundred to buy it in the box. Just plug it in.

Brian Contos

And as you say, you can do the work to make that cheaper version work. It’s not that hard. Takes a little more technical acumen. Or you can buy the out-of-the-box version—it’s GUI-based, you set it up, hit go.

And it’s crazy how simple it is. We’ve probably all seen it without knowing it—at hotels, where you see 50 wireless access points, or any kind of business park.

Interesting. So, evil twin—lots of content out there about that. A really popular, easy, inexpensive type of attack. But let’s pivot to Nearest Neighbor, which I think is really a story about persistence. How persistent hackers can be, and how creative they can be in order to go after their target.

Maybe share a little bit about Nearest Neighbor?

Brett Walkenhorst

Sure. Let me just back up for a minute. Transitioning from evil twin to nearest neighbor highlights one of the interesting things about the nearest neighbor attack.

With evil twin, you have to be within earshot. The client has to be able to hear my fake access point’s beacons, and I have to hear them. So I have to be close enough physically to pull off the attack.

But then, a couple years back, there was an interesting attack publicized in the news where someone forward-deployed hardware on drones and landed those drones on the rooftop of the target organization. They conducted the attack remotely. So the attacker no longer needed to be physically close, but did have to deploy hardware. Still, the cost is relatively low.

Then comes along this idea of the nearest neighbor attack. These guys were very patient and very creative. They leveraged existing wireless devices to conduct a similar kind of attack.

They wanted to penetrate a target organization’s network. There were some things that were suboptimal in terms of security, but it was mostly a pretty well-locked-down organization. The attackers used password spraying to guess and confirm valid credentials, but MFA prevented remote access via the public-facing services.

So what did they do? They attacked organizations nearby—physically nearby. When I say neighbor, I mean across the street or just down the road. They penetrated a neighbor’s network, found a device on that network that was dual-homed so they could pivot to the wireless domain, and used that to penetrate their primary target’s Wi-Fi—because the credentials were shared and Wi-Fi didn’t have MFA.

It was really creative, but also not that hard, considering how many billions of wireless devices are out there. If I’m an attacker, I just need to get onto one of those devices and use the wireless interface as part of my attack.

Now, I don’t have to be anywhere close. I can be across the ocean—as these attackers were—and conduct a successful network penetration.
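One way to picture the “dual-homed device” pivot is a simple host inventory check that flags machines with both a wired and a wireless interface up at the same time. The sketch below is an assumption-laden illustration, not anything from the episode: it uses Python with psutil, and the interface-name prefixes are rough, platform-dependent heuristics.

```python
# Flag a host that is "dual-homed": wired and wireless interfaces up at once.
# Assumes psutil is installed; the name prefixes are illustrative heuristics only.
import psutil

WIRELESS_PREFIXES = ("wlan", "wlp", "wifi")
WIRED_PREFIXES = ("eth", "enp", "em")

def classify(name: str) -> str:
    lowered = name.lower()
    if lowered.startswith(WIRELESS_PREFIXES):
        return "wireless"
    if lowered.startswith(WIRED_PREFIXES):
        return "wired"
    return "other"

up_kinds = {
    classify(name)
    for name, stats in psutil.net_if_stats().items()
    if stats.isup
}

if {"wired", "wireless"} <= up_kinds:
    print("[!] Host is dual-homed: wired and wireless interfaces are both up")
```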

Brian Contos

It’s fascinating to see this vector evolve to what it is now. Wireless doesn’t need you to be close anymore. You just need to exploit something that is close—a wireless attack from thousands of miles away, which is really interesting.

And in the case you cited—and we’re talking about persistence—what I thought was most interesting is this: they try to get into organization A and can’t. So they get through a wired network into organization B, which is also wireless, and use that to connect to organization A.

Then they lose connectivity or get booted out of organization B. That doesn’t deter them. They attack organization C, which is also nearby, and do the exact same thing.

Poor organization A. They just can’t win. But it shows how persistent these attackers can be. And often you’re putting so many controls into your traditional network environment, compared to your wireless—because you think, “Well, you’ve got to be close. Who’s going to do that?” This changes the whole paradigm. Very interesting.

Brett Walkenhorst

Yeah. Agreed.

Brian Contos

So, I think that gives some of our viewers who maybe weren’t as familiar with these wireless airspace issues some background.

But as we said at the beginning, I’d really like to double-click on data centers. They’re sprouting up everywhere—especially now with a lot of AI investment. We all know the big cloud providers.

Data centers are a thing, and they’re a targeted thing. But maybe not everyone’s thinking about the wireless airspace in terms of how they’re being targeted. What are the risks there? Maybe you could even share some anonymized stories of what you and the team at Bastille have come across?

Brett Walkenhorst

Sure. We’ve done quite a bit of work with data centers over the years, and we’ve learned a lot. I don’t know that we know all of the weaknesses, but the thing that bothers me most when I think about data centers is data exfiltration—and wireless is a beautiful way to do that, because for the most part, we’re not paying attention to it.

For example, we have tools in Wi-Fi networks—if you enable them and use good products—that help identify when a network penetration attack is taking place. They’re more or less effective. But what happens if we’re using Wi-Fi in a configuration that’s outside that purview?

There’s nothing to stop a hotspot from popping up, and often no monitoring going on to identify that as a problem.

What we’ve seen, for example, is client devices inside a data hall—rack-mounted devices—that somehow have a wireless interface on them. They connect to a mobile hotspot whenever it comes into the data hall and remain active for some amount of time. All that traffic goes back and forth—probably over cellular infrastructure and then to who knows where.

None of that is being monitored. No flags. This can go on for weeks or months.

We have this big hole in our firewall, and we’re just not paying attention to it. But this stuff happens.

We’ve seen it when we put Bastille into an environment like that. Because Bastille monitors all of the wireless activity, now we see, “Oh, there’s a hotspot that came in and connected to a client. That client doesn’t move, so it’s probably in a rack somewhere.” Now you’ve got a flag—“Oh crap, better do something about that.” You can find the device and disable it or dig deeper.

Without that visibility, you just don’t know it’s happening.
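As a concrete illustration of that kind of flag, here is a toy rule: alert whenever a known rack-mounted client associates with an SSID that is not on a corporate allow-list. The event schema, MAC addresses, and SSID names are all hypothetical—this is a sketch of the idea, not Bastille’s data model or detection logic.

```python
# Toy rule: flag data-hall clients that associate with non-approved SSIDs.
# The event schema, allow-list, and MAC addresses are illustrative assumptions.
from dataclasses import dataclass

APPROVED_SSIDS = {"corp-prod", "corp-mgmt"}       # assumed corporate networks
DATA_HALL_CLIENTS = {"aa:bb:cc:11:22:33"}         # MACs of known rack-mounted gear

@dataclass
class AssociationEvent:
    client_mac: str
    ssid: str
    timestamp: float

def evaluate(event: AssociationEvent) -> None:
    if event.client_mac in DATA_HALL_CLIENTS and event.ssid not in APPROVED_SSIDS:
        print(f"[!] Rack client {event.client_mac} joined unapproved SSID {event.ssid!r}")

evaluate(AssociationEvent("aa:bb:cc:11:22:33", "personal-hotspot", 1_757_000_000.0))
```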

Brian Contos

And to your point about “monitoring all the stuff”—correct me if I’m wrong—but Bastille’s software-defined radios aren’t just watching traditional Wi-Fi. They’re watching ZigBee, Bluetooth Low Energy, and a million other things within the spectrum, right? And doing that constantly.

That’s the most unique thing. People walk around all the time with new Bluetooth audio or video devices. Even in secure locations, the chances of something popping up are pretty high, right?

Brett Walkenhorst

Right. That’s a great point. I’m glad you pivoted to that.

I started off talking about Wi-Fi because we were discussing Evil Twin and Nearest Neighbor, and we’ve seen that behavior in data centers.

But Bluetooth tethering is another mechanism for data exfiltration. And nobody is paying attention to Bluetooth.

So now if you see a Bluetooth network pop up, and there’s something static in the data hall—and maybe the other end of it isn’t even visible or is outside the monitored environment—you’ve got a data path that’s completely outside the purview of any monitoring system, other than a wireless monitoring system like Bastille.

You have to bring visibility to that. Otherwise, someone clever, using maybe a little pair of dongles they bought for 50 bucks, has just bypassed your entire security infrastructure.

Another thing we see is clients switching networks. They’re on a secure internal network, and then they switch to something else. Maybe it’s unsecure, maybe it’s a hotspot, maybe something else. They can use that hopping behavior. Sometimes we see it persistently—switching between two or three networks.

We don’t see the IP layer, so I can’t say for sure the data is being exfiltrated—but it’s risky behavior I wouldn’t want to allow, especially in a high-security area.
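The “hopping” behavior can be expressed as a simple sliding-window rule: count how many distinct networks a client joins within a window and alert past a threshold. The sketch below is an illustrative assumption about how such a rule might look, not a description of any product’s analytics.

```python
# Illustrative sliding-window rule: alert when a client associates with
# more than MAX_DISTINCT networks within WINDOW_SECONDS.
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_DISTINCT = 2

history = defaultdict(deque)  # client_mac -> deque of (timestamp, ssid)

def record_association(client_mac: str, ssid: str, timestamp: float) -> None:
    events = history[client_mac]
    events.append((timestamp, ssid))
    while events and timestamp - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    distinct = {s for _, s in events}
    if len(distinct) > MAX_DISTINCT:
        print(f"[!] {client_mac} hopped across {len(distinct)} networks in the last hour: {distinct}")
```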

Brian Contos

And at least it’s suspicious. Something that would warrant further investigation. But you wouldn’t see it through EDR, your firewall, or IPS. This is unique to spectrum-based detection.

Brett Walkenhorst

It is. Another example we had in a data hall: we found some industrial chillers with an unsecure ZigBee mode that had been enabled by the vendor who installed them. They did that for their own convenience—so they could connect and maintain those systems from the parking lot without going through physical security.

But that highlights the problem. If it’s a dual-homed system—wired and also running an unsecured wireless mode—now it’s wide open to exploitation.

You shouldn’t have that. And no one has visibility until they bring a system in and realize, “Oh crap, unsecure ZigBee—why is that there?” Then you lock it down.
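For readers curious what “unsecure ZigBee” looks like on the wire, here is a toy check over a captured ZigBee network-layer frame: the NWK frame-control field is the first two bytes (little-endian), and bit 9 is the Security sub-field. This is a bare-bones illustration under that assumption—real ZigBee security assessment involves much more than one bit.

```python
# Toy check: does a ZigBee NWK frame have its Security sub-field set?
# Assumes `nwk_frame` is the raw network-layer frame, already stripped of
# the 802.15.4 MAC header. Bit 9 of the NWK frame control is "Security".
def nwk_security_enabled(nwk_frame: bytes) -> bool:
    if len(nwk_frame) < 2:
        return False
    frame_control = int.from_bytes(nwk_frame[:2], "little")
    return bool(frame_control & 0x0200)

# Frames where this returns False are traveling with no network-layer encryption.
```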

Brian Contos

And to lock it down often means shutting it off, hardening those systems so they’re not running those bits. But to me, this is the equivalent of someone physically going to the back door of the data center and propping open that secured door with a rock—because that’s where everybody goes out to smoke—and then they just leave it open.

Brett Walkenhorst

Yeah. Exactly right.

Brian Contos

You and I were talking a little while ago about how in a backpack you could fit dozens of these devices that could be used for spying on organizations. And a lot of them are things we all have. We all probably have a smartphone. We might have Bluetooth-enabled earbuds. But then there are Bluetooth speakers that actually have hidden video cameras, spy pens—though I don’t know why you’d need those anymore—hearing aids, lav mics with really long reach.

Long enough that if somebody got something into a conference room, someone else could be in a parking lot and still get the signal.

Brett Walkenhorst

Yeah.

Brian Contos

The one I thought was really interesting—and I didn’t know these existed prior—was this idea of security cameras that are solar-powered. They have SIM cards and store their data in the cloud. So not only are they not even plugged in for power—they could be, or they could have a backup battery—but they’re capturing audio and video, streaming over cellular. It’s not going across your network, so your data security, network security—nothing’s going to detect that.

And it’s sending it out to the cloud. You drop a bunch of these in an organization—maybe people find one or two, or ten—but maybe they don’t find the other fifteen. That’s a massive risk. I could see this affecting batch manufacturing, discrete manufacturing, power and energy, any kind of critical infrastructure—as well as executive boardrooms.

Brett Walkenhorst

Absolutely. Now, if we get away from data centers—I don’t care too much about audio and video exfiltration from a data center, because it’s so loud in there. Nobody’s talking about anything important, for the most part. Fans and noise. If they are, it’s hard to hear.

But in other environments, that kind of thing is really, really important. Like you said, there’s a whole slew of devices out there that use Wi-Fi, Bluetooth, cellular—some kind of mechanism for command and control and data exfiltration. And they’re simply intended to extract audio and video information.

Conference rooms, boardrooms—there’s a lot that goes on in those kinds of areas. If you don’t have visibility into wireless exfil, you’re at risk. Depending on your risk profile, maybe that’s something you should be concerned about.

Having the ability to bring in systems to monitor and identify if something like that is happening could be crucial.

We’ve also seen these devices on executive floors. This goes back to the data center idea a bit.

We’ve seen O.MG cables and USB Ninja cables on the executive floors of large Fortune 500 companies. If you plug one in, it powers up either a Wi-Fi access point or a Bluetooth central mode. You can pair to that device from somewhere else and take over whatever it’s plugged into, because it has HID privileges.

So now, not only can you exfiltrate data—you can do keylogging, keystroke injection, you can take over the device. We’ve seen these on executive floors.

You could do the same kind of thing in a data hall, but with cables it’s more conspicuous. Still, they make little USB dongles that do basically the same thing.

Kind of like a bad USB attack—but now I have the flexibility to tailor and modify the attack via wireless command and control. And that kind of stuff is really easy to slip in. It’s not hard to disguise. They’re small form factors.
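Because these implants ultimately enumerate on the host as a human interface device, one complementary host-side control is simply to log every new HID that appears. Here is a minimal, Linux-only sketch using pyudev—an assumption for illustration; the detection discussed in this episode is RF-based, not host-based.

```python
# Minimal Linux sketch: log every HID device that appears, e.g. an implant
# cable enumerating as a keyboard. Assumes pyudev is installed.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="hid")

for device in iter(monitor.poll, None):   # blocks, yielding udev events
    if device.action == "add":
        name = device.get("HID_NAME", "unknown HID")
        print(f"[!] New HID device attached: {name} ({device.device_path})")
```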

Brian Contos

Yeah, you mentioned those O.MG cables. There’s no visual inspection that’s going to reveal, “Oh, this is one of those cables.” They look just like a regular cable—Ethernet, phone charging, whatever.

Maybe you could weigh it, I guess? But you’re not going to go around doing that. Unless you’ve got an X-ray machine and know what you’re looking for, you’re not going to spot it.

So the best thing to do is monitor the wireless so you can see that it’s beaconing out. Otherwise, you just don’t know.

Brett Walkenhorst

Yeah.

Brian Contos

Let me ask you: when it comes to a SCIF, you’ve got these really secure locations where things can be quite binary—like, “I should see nothing, and if I see anything, that’s a problem.” People aren’t supposed to have phones, or there’s not supposed to be any Bluetooth.

But what about organizations like an executive conference room at a tech company—where everyone’s got a bag full of things they use for work and personal use? In that type of environment, what’s the best approach to look for suspicious signs or malicious indicators?

Brett Walkenhorst

It’s tricky. It’s not enough just to see that something’s there—because there’s a crap ton there. We’re looking for that needle in a needle stack, because it looks just like everything else. But there’s going to be some little behavior that’s concerning.

One thing we do at Bastille is constantly research various threats and characterize their behavior based on what our monitoring system can see—so we can fingerprint the threat, differentiate it from normal behavior, and build detectors that will automate that detection.

That can include misconfigurations. So, for example, ZigBee—if it’s in a vulnerable mode, it’s going to send certain packets. That lets us see: “Hey, dude, your network’s open to this kind of thing. You should lock that down.”

It could be poor configuration, or it could be behavior over time that starts to look suspicious. Some threats we’ll see right away—like an Evil Twin attack, which is pretty easy to detect. We have ways of spotting that quickly, even though there are lots of variations on the theme.

Other things might build over time. We can say, “Hey, something’s starting to get a little wonky. Threat level rising.” And we can point you to where it is—“It’s over here.”
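A toy way to picture “threat level rising” is a per-device score where weighted suspicious events accumulate and decay over time, with an alert when the score crosses a threshold. The weights, half-life, and event names below are invented for illustration; they are not Bastille’s scoring model.

```python
# Toy threat score: weighted events accumulate per device and decay over time.
# Weights, half-life, and event names are illustrative assumptions.
import math
import time

DECAY_HALF_LIFE = 6 * 3600     # seconds for the score to halve with no new events
ALERT_THRESHOLD = 10.0
EVENT_WEIGHTS = {"unknown_ssid_join": 3.0, "deauth_burst": 5.0, "new_bt_pairing": 2.0}

class DeviceScore:
    def __init__(self):
        self.score = 0.0
        self.last_update = time.time()

    def add_event(self, kind: str, now=None) -> float:
        now = now if now is not None else time.time()
        elapsed = max(0.0, now - self.last_update)
        self.score *= math.exp(-math.log(2) * elapsed / DECAY_HALF_LIFE)  # exponential decay
        self.score += EVENT_WEIGHTS.get(kind, 1.0)
        self.last_update = now
        if self.score >= ALERT_THRESHOLD:
            print(f"[!] Threat score {self.score:.1f} crossed the alert threshold")
        return self.score
```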

Brian Contos

So you can see a pattern of life. You can go back in time and see where the device was in your space, because you’re locating everything constantly.

Brett Walkenhorst

Right. If it’s in there and showing threatening behavior now, you can take action. That might be an automated response from another system, or it might be a human response—go interdict, investigate, whatever it is.

Brian Contos

I like how you phrased that. It’s not just that the thing is there—it’s all the business and technical context around it. It’s determining, “Okay, let’s look at the supporting evidence.” One plus one might not equal two in this case.

Brett Walkenhorst

Exactly. That’s something we’re always working to improve. The analytics approach is a layer on top of detection and location. And it will continue to evolve—as threats evolve.

Brian Contos

Clearly attackers are using this. Why bang your head against the front door if you can go through an unsecured side door, right? Biggest ROI for them, for sure.

Brett Walkenhorst

Absolutely.

Brian Contos

Brett, it’s always fascinating talking with you. If people want to learn more about the research you and Bastille are doing—or just more about Bastille in general—what’s the best place to find that?

Brett Walkenhorst

Best place to start is our website: bastille.net, B-A-S-T-I-L-L-E dot net. There’s lots of content there. We’re always posting more. We’ve got a blog, we do webinars routinely. If any of that interests you, feel free to go there to start. Dig in however it makes sense. And we’re happy to interface directly if you have specific questions.

Brian Contos

Awesome. Well, Brett—as always—thank you for being part of Mitiga Mic.

Brett Walkenhorst

Thanks, Brian. Appreciate the time. Good to be here.

