Wednesday, December 21, 2016

ScotiaBank's Cybersecurity Problem - A video

As everyone knows, there are a few things about bank security in Canada that get my goat.  I've documented a few of them in a little video and explained why they're a problem.

Here's that video:



What I hope to achieve from this is that it kickstarts a proper dialogue on what's continually going wrong.  Traditionally, this bank and I don't have proper communication - just the usual platitudes and rhetoric about security being paramount - so hopefully someone will see this and start taking seriously the points that I raise with the bank, instead of them disappearing into the perpetual customer service black hole.


Friday, December 9, 2016

Thoughts on Canada's Banks and Cloud Based HCE

Update - This turned out to be busted, but not for the reasons I thought it would be.  See here

This week, one of the big five banks in Canada rolled out an update to support cloud-based HCE (Host Card Emulation).  Specifically, it's the Rambus “Bell ID” system - which they call “Secure Element In The Cloud”, or “SEITC” - though everyone else has known it for years as plain old cloud-based HCE.

Whilst it’s always interesting to see technological changes, it’s equally important to think about the ramifications of such changes.  

Just rewinding for a second for some quick history: first we had “Google Wallet” V1.0, which tried to use a hardware secure element to hold encrypted data, but the network operators had started their own ISIS system (used to be at www.paywithisis.com), which got renamed for obvious reasons to Softcard (at gosoftcard.com).  Simultaneously, smartphone manufacturers started adding their own secure hardware - Apple's, for instance, is called the Secure Element (not to be confused with its Secure Enclave).

Google Wallet V3 is radically different.  It uses a technology called host card emulation (HCE) instead, where the card-emulation and secure element roles are separated. In HCE mode, when an NFC-enabled Android phone is tapped against a contactless terminal, the NFC controller inside the phone redirects the terminal's communication to the host operating system. Google Wallet picks up the request from the host operating system and responds with a virtual card number, using industry-standard contactless protocols to complete the transaction - that's the card-emulation part. The transaction then proceeds to Google's cloud servers, where the virtual card number is replaced with the real card data and authorized with the real issuer. Since the real card data is stored in Google's cloud servers, the cloud plays the role of the secure element. In general, this approach is considered less secure than the embedded SE approach.
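To make that flow a little more concrete, here's a minimal sketch of the card-emulation half on Android, using the platform's HostApduService.  The TokenApduService name and the response-building helper are hypothetical, and a real wallet speaks the full EMV contactless protocol, but the point stands: the thing answering the terminal is ordinary app code plus a cloud round trip, not tamper-resistant silicon.

```kotlin
import android.nfc.cardemulation.HostApduService
import android.os.Bundle

// Hypothetical sketch of the card-emulation half of cloud-based HCE.
// The NFC controller hands the terminal's APDUs to the host OS, which
// routes them here instead of to a hardware secure element.
class TokenApduService : HostApduService() {

    override fun processCommandApdu(commandApdu: ByteArray, extras: Bundle?): ByteArray {
        // Answer the terminal with a limited-use virtual card number (token)
        // that was provisioned earlier from the issuer's cloud servers.
        // The real PAN never lives on the phone; the cloud back-end swaps
        // the token for the real card data during authorization.
        return buildContactlessResponse(commandApdu)
    }

    override fun onDeactivated(reason: Int) {
        // Terminal moved away or another service was selected;
        // nothing to clean up in this sketch.
    }

    // Placeholder: a real implementation speaks the EMV contactless
    // protocol here rather than returning a bare status word.
    private fun buildContactlessResponse(commandApdu: ByteArray): ByteArray =
        byteArrayOf(0x90.toByte(), 0x00) // ISO 7816 "success" status word only
}
```

The service also has to be declared in the app manifest with the payment AIDs it claims, but none of that changes the picture above.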

The problem the banks are hitting is that many people have devices without a hardware enclave, and the banks want to be seen to be accommodating those users.  In this example, they've gone the Bell ID route.

When you consider that a major part of the security model is that the secret sauce lives in a secure part of the hardware that the OS generally has no access to, the idea of lifting it out and sticking it in the cloud immediately raises the question: what happens if that back-end is compromised?

There is also less privacy with cloud-based HCE. The mobile payment provider can already see who uses a given credit card number, and it can choose to share that data with merchants or other companies for commercial and advertising purposes - something Google has already done with Google Wallet.

When you weigh the pros and cons, it is hard not to feel that the banks have put security second to the optics of convenience, on devices that may be inherently insecure anyway.


Wednesday, December 7, 2016

A whiff of SWIFT

Regular readers of this blog will know that when a bank tells me how safe I'm "supposed to be", I will largely view anything I'm told as hornswoggle.  All my adult life, I've listened to people telling me how much effort, technology and protocol is in place to protect me, yet, it can always be demonstrated that things are nowhere near as safe as people would have you believe.

Recently, I've been working on the hypothesis that Canadian banks spray source code around like some people spray air-freshener... that it's just flying about and nobody cleans up when it lands somewhere it shouldn't.  This hypothesis may initially sound absurd in the face of conventional wisdom, but then again, conventional wisdom assumes that the banks are actually safe - even though you can pick that assumption apart and peel back the layers.

Banks obviously say that source code is kept suitably safe - after all, they have to say that to keep up confidence - but today, during my lunch break, I decided to do something different.  Very different.

Instead of looking for an accidental source code leak like I usually do, I assumed this time that I was looking for code put somewhere by a programmer who really doesn't give a crap about what they're doing and generally has no regard for customers or the bank.  This meant that not only did I have to look somewhere outside of the banks, but it had to be somewhere it would border on maniacal to even conceive of putting code.

I found what I was looking for.  Yes, I was surprised, too.  Most surprisingly, in this source code was my first run-in with code that handles SWIFT transactions.  

You may remember news stories about how the SWIFT system was compromised earlier this year.  Any code that interfaces with that system, or with the data going through it, should definitely not be lying around outside a bank - that's just asking for trouble.  However, real life is often stranger than fiction, and that's exactly what happened.

This code comes from one of the core financial services at the centre of one of the bank's South American subsidiaries, and it runs through all transaction types - reading through it, we can see it processing mortgages, Forex, SWIFT, drafts, deposits, and so on.  It also gives insights into how the overall service was built and what components it comprises (a task for another lunch break, perhaps).

I won't say where I found this or which bank it is for until I've worked out what to do with it.  Canadian banks don't always cooperate with me anyway, and given its nature, I may have to report this directly to SWIFT to deal with.



Monday, December 5, 2016

The 230 day vulnerability


In April of 2016, I found myself talking to a lady at the Office of the President at Scotiabank.  I knew something that Scotiabank might want to know about with regards to a cybersecurity problem it didn’t know it had, and we were trying to explore the next steps to exchange information.  

The outcome of that call was that I would send Scotiabank an email laying out some background information, and they'd pass it to the most appropriate person in the bank to get the next steps moving.  I work in technology and I definitely don't work for free, especially for banks, and Canadian banks generally don't pay the public for cybersecurity advice - which traditionally means that nobody tells the banks what they need to know in the first place.  However, I sent them an email explaining that the bank had a big cybersecurity problem, and I tabled a simple barter: as a bank, they could make a phone call for me that I didn't have the power to make, and in return they would get the information they needed.  It's a simple "you help me, and I'll help you" arrangement, and no money has to change hands.

A day or two later, a senior cybersecurity person at Scotiabank called Rob Knoblauch took a look at my LinkedIn profile, and that was the last observable action Scotiabank took on the matter that I could record.  Given the choice between acting on being told you have a cybersecurity issue and not acting at all, the bank chose the latter: the issue disappeared into a black hole, and nobody at the bank ever contacted me again.  Exactly 120 days after that, I sent a follow-up email to the Office of the President, explaining that I was sending the information to the CCIRC.  No response came from that message...

So, what precisely was at stake?

The bank had been observably slipping in its cybersecurity efforts for some time, and by April 2016 it was showing serious signs that an internal cyber-shambles was in full effect.  Not only had the bank forgotten to protect its Android source code (meaning every time it published a new app, everyone from white-hats to criminals could see how the app worked and could compromise it, patch it, repurpose and repackage it, etc.), but it still allowed phishing on its Internet banking website because it had not patched a simple click-jacking attack vector.  It was also clear that cybersecurity policies either were not being followed or didn't exist, as popular credential-sharing sites still listed Scotiabank's domain.
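To put the click-jacking point in perspective, the usual fix is a one-line response header telling the browser that the site must never be rendered inside someone else's frame.  Here's a minimal sketch, assuming a Java-servlet style stack purely for illustration - I make no claim about the bank's actual platform:

```kotlin
import javax.servlet.Filter
import javax.servlet.FilterChain
import javax.servlet.FilterConfig
import javax.servlet.ServletRequest
import javax.servlet.ServletResponse
import javax.servlet.http.HttpServletResponse

// Hypothetical anti-clickjacking filter: refuse to be framed by other sites,
// which is what allows the overlay/phishing trick in the first place.
class FrameBustingFilter : Filter {
    override fun init(filterConfig: FilterConfig) {}

    override fun doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
        val http = response as HttpServletResponse
        // Legacy header understood by browsers of the era
        http.setHeader("X-Frame-Options", "DENY")
        // Modern equivalent via Content-Security-Policy
        http.setHeader("Content-Security-Policy", "frame-ancestors 'none'")
        chain.doFilter(request, response)
    }

    override fun destroy() {}
}
```

That's the entire class of fix being talked about here: a couple of headers on every response, registered once in the web application's filter chain.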

Meanwhile, in the US, a Mobile Application Development Platform (MADP) vendor, Kony Inc, which makes the tools Scotiabank uses, became the target of a frustrated Scotiabank programmer's ire: the programmer inserted a message on a test screen in the Android app containing the words "Fuck kony" (sic).  The programmer probably thought that nobody would ever see this unauthorized addition, unaware that the release team at Scotiabank was failing to obfuscate the app properly before sending it out to customers, and also unaware that nobody appeared to test the security of the final product.  Because Scotiabank had turned off code obfuscation on its Android app that same month, anyone who knew what had happened could crawl through the mobile source code, and it was apparent that any rogue programmer inside the bank inserting unauthorized changes could get away with it - nobody had caught this one, and over a million Canadians were now walking around with an expletive-laden app on their phones.  The CCIRC was notified that the source code was available to all and sundry, but the rogue-programmer problem was left in place as a warning canary, to see whether the bank was doing proper code reviews and to time how long it would take them to catch it.  Besides, if anyone inside the bank did anything worse to the app, it would be caught outside the bank and the alarm raised.
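For the non-developers: "obfuscation" on Android usually means ProGuard (these days R8), and switching it on for release builds is only a few lines of build configuration.  Here's a minimal sketch, written in Gradle's Kotlin DSL purely for illustration - the file and rule names are the usual defaults, and I obviously have no visibility into the bank's actual build scripts:

```kotlin
// app/build.gradle.kts (illustrative; the Groovy build.gradle equivalent is near-identical)
android {
    buildTypes {
        getByName("release") {
            // Shrink and obfuscate the release build so the shipped bytecode
            // no longer reads like the original source.
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android.txt"),
                "proguard-rules.pro"
            )
        }
    }
}
```

Leaving that switched off - or equivalent protections unapplied - is what let outsiders read the app's internals release after release.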

October was National Cybersecurity Awareness Month (NCAM), and Scotiabank was as vocal as any of Canada's big banks with its platitudes about how it takes security "very seriously", peddling the well-worn rhetoric that "security is of paramount importance".  Each time, the focus was on making sure the customer did not compromise themselves, and the bank with them.  Meanwhile, in spectacular fashion, Scotiabank kicked off NCAM with two more mobile source code breaches in as many days, as it pushed more updates to its app, still with no protection on its source code.

It also came to light that Scotiabank's programmers had posted crash stacks from internal iPad kiosk projects to the public paste site pastebin.com.  During NCAM, Scotiabank had more leaks than a sanitary towel advertisement with blue water demonstrations.   This blog, which many banks in Toronto read, tipped everyone off on November 15th that Scotiabank had an unauthorized code addition in its app.  By November 16th, a new app was being pushed to Canadians that, whilst still exposing much of its source code, was at least being polite to its MADP vendor again.  As ever, Scotiabank said nothing about the matter.

The exact time the programmer slipped in the vulgarity is unknown, but it was demonstrably visible to those outside the bank for at least 230 days, during which time the bank never caught it using its own policies and practices.

Whilst Canadians spent much of 2016 walking around with swearing aimed at the bank's vendor in their pockets, they were simultaneously very lucky that this programmer only did what he or she did, rather than planting a few lines of unauthorized code to exfiltrate credentials instead.  Every time the bank shipped an unauthorized change in its app, Canada was dodging the chance of a very large insider-job bank heist.


That is something definitely worth mulling over.