Tuesday, June 6, 2017

How do you fix mobile banking in Canada? Part 2

This article has been a rollercoaster to write.  

I wrote it, and then a series of security events happened which delayed things whilst that got cleaned up.  Then I came back to finish the article, incorporating what happened after I originally wrote it.  Therefore, I apologize if this is a bit more disjointed than my normal style.

Cheers

Jase

— 


A quick recap of the angle I’m coming from…  Whilst everyone thinks mobile banking and digital banking in Canada is fine and dandy, safe and secure, I personally believe it's a bit of a nightmare with lipstick applied to it.  Banks leak on a daily basis, and people frequently do things that leave me scratching my head, so in part 1 of this series I took a look at the ludicrous design all Canadian banks share of shipping apps with the URL endpoints to their backends on full display, as well as dodgy things like IP addresses that resolve to India and clearly unauthorized code that slips through…

After the success of the first article, I wanted to cover a different aspect of digital banking in Canada: people and security training, and how, in my view, they affect digital security in mobile banking.  Naturally, the banks will tell you that everyone is trained to a high degree in safe programming, and so on, and that you need not worry about this - but I beg to differ…  In the process of researching this article, I found a multi-bank breach backing up precisely the point I was about to write about.

After the previous “part 1” article on mobile banking, strangers sent me copies of emails and correspondence that their various Canadian banks had sent them in response to their security concerns.  The responses all had the following traits in common:

  1. The responses were all styled like canned responses, starting with something along the lines of “At blah blah bank, we take customer security very seriously“, and then deflecting the customer to a standard web page with a security guarantee whilst reiterating what *their* responsibility is to the bank.  None of this addressed whatever the customer was originally asking about, so basically they were all blowing smoke up the customer’s backside and fobbing them off.
  2. The responses always came from a non-technical customer service rep, and never from someone who actually understood what the customer was asking.  This is akin to getting mortgage advice from the bank’s electricians, and from a cybersecurity standpoint it has the same effect.
  3. There was never any indication that the concern being raised was going to reach the people capable of fixing the problem.  This is something that I know very well from my own experiences over the years.  Things appear to go into black holes, and you never hear about them again.

This is all stuff I’ve seen a lot of personally, so where does this strange attitude to customer security come from?

At the time of writing, 9 out of 25 (36%) of the tested banks designated as “Schedule 1” under Canada’s Bank Act have a standard phishing problem caused by incorrectly configured security on their websites.  The general rule (with the exceptions of BMO and TD) is that if the average person has heard of the bank, it’s got a phishing problem.  Banks that the average person wouldn’t know (ZagBank, VersaBank, B2B Bank, Citizens Bank of Canada, etc.) don’t suffer this problem, despite the fact that the major browsers shipped a solution to this about 8 years ago.  I specifically notified many of the affected banks that there is a problem, but it was never fixed.  The ratio of affected banks drops off somewhat when you look at the Schedule 2 banks.  Things get even more secure in Schedule 3.  
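For the curious, the kind of browser-era fix I’m alluding to is along the lines of HTTP Strict Transport Security (HSTS), which the major browsers started honouring years ago.  A minimal sketch of judging whether a site’s HTTPS response headers carry a usable HSTS policy (the header values below are illustrative, not taken from any real bank):

```python
# Minimal sketch: does a set of HTTPS response headers carry a usable
# HSTS policy?  (Illustrative only - a real audit would also check
# http->https redirects and the browser preload list.)

def hsts_ok(headers):
    """Return True if the Strict-Transport-Security header is present
    with a positive max-age, which is what a browser requires before
    it will refuse plain-http connections to the site."""
    value = headers.get("Strict-Transport-Security", "")
    max_age = 0
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            try:
                max_age = int(directive.split("=", 1)[1])
            except ValueError:
                pass
    return max_age > 0

# Example header sets (made up):
print(hsts_ok({"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}))  # True
print(hsts_ok({}))  # False
```

A site that never sends this header (or sends it with max-age=0) leaves the door open for a phishing or downgrade page to sit on the plain-http version of the address.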

There’s a clear trend.  The major question is where does that trend originate?

You could joke that the more Canadian a bank is, the more likely it is to be open to phishing.  However, if we leave banking for a second, we can use another “truly Canadian” behemoth to get a different angle on things.

(Click image for bigger version)

As you can see in the job postings above from Bell Canada, there are differences in how long they think they’ve been around (likely the result of sloppy copy/paste errors), but there’s no doubt about their “Canadian-ness” that they’re trying to put front and centre.  What is important here is that Bell Canada is not a bank, but suffers many of the same symptoms as the major Canadian banks.

The first problem is simply an attitude problem.  If you question Bell Canada and their security, you’ll get the same type of canned response about how “security of our customers is paramount”, or some equally well-worn cliché.  There’s a cultural wall around the company where any challenge to their security is met with a standard response of denial. There's apparently no security problem.

Second, if you look at the company technically, you’ll see the same technical hallmarks you see in a typical big bank: there are insecure mobile apps being pushed to the public, the website where you manage your account security has been open to phishing for years, and customer data moves around on non-HTTPS connections.  Other than being Canadian and just as "leaky" as a standard bank, what else does an entity like Bell Canada share with over a third of the Schedule 1 banks in Canada?

Obviously, size is a factor, and so are financial constraints around certain resources.  In both the banks and a large non-bank organisation like Bell Canada, customer service is treated as an expense of doing business rather than as an investment in making things better.  

In the wake of the latest breach out of Bell Canada, in which 1.9 million accounts were compromised, you have to ask whether the concomitant fallout and class-action suits could have been avoided if the customer service desk actually connected concerned customers to internal security people, rather than just declaring “We’re Bell Canada, ergo we’re safe” and carrying on in perpetual denial.  

Another problem is a sheep mentality.  Large corporations like Bell Canada just follow what the banks do.  Thus, if the bar is initially set low by the banks, a corporate entity like Bell Canada is not likely to go much above and beyond what the banks do, right?

The biggest problem, however, is training people.  

If you’ve never worked in a bank, you basically go through a few weeks of onboarding when you start.  The process involves some basic common-sense training, like how to identify when you’re being bribed, how to identify and report when something looks dodgy, and so forth.  In Ontario, this normally comes with an additional course on dealing with people with disabilities, but again, it’s all stuff that should be common sense.  

If you were to believe the tone of this training material, you might be fooled into thinking that banks are really safe from a digital standpoint and that your coworkers are safe, too.  It's a tone that implies you're working in a kind of digital fortress, if you will.   However, there appear to be no insider-threat programs, no “how to be a safe programmer” courses, and so on, so things can unravel pretty quickly from the inside out.  

My Militarily Critical Technical Data Agreement finally expired in Feb 2017, after half a decade.  Because I’ve been through the NISPOM, it’s really obvious to me that you don’t move code from one computer to another using USB sticks: you may forget to properly sanitize the stick afterwards, leading to the potential for a leak if you leave the office at the end of the day with that stick in your pocket.  Not to mention that if you’ve previously stuck that USB stick into another computer at home or a library, you’re at risk of introducing malware into the network.  But I’ve seen that happen in banks on a daily basis.  

I'm really not making that up.

My gut feeling is that banks could learn a lot from the spirit of the NISPOM, especially when it comes down to safeguarding confidential information.  Unfortunately, I don’t think many people working in bank IT have ever heard of it, let alone looked at it.

Here’s a specific example of what I’m talking about.  

In Canada, we see a lot of Indian consulting companies in the banks (the idea being that it’s cheaper), and this by its very nature means that hiring these Indian-headquartered companies comes with an additional security problem: they must exfiltrate a lot of confidential documentation about the inner workings of the banks, whether for collaboration with their other regional offices or for approval by senior managers who are not in Canada, or even North America.  This problem should be common sense, but the banks still do it anyway.  “Ours is not to reason why” and all that jazz, right?

The net result of this forced exfiltration, naturally, is that their version of the aforementioned USB stick problem is orders of magnitude worse.  You have not just bad code and bad document-handling skills resulting in stuff being shared with all and sundry outside the bank; the protectionist cultures of some of these consultancies have always demonstrated to me that they’re more interested in the politics of pushing their consultancies ever deeper into the banks (and covering their backsides along the way when things go wrong), so doing the right thing for the bank - their customer - comes second to that agenda. 

As an aside, I've fallen foul of that personally, because if I see something that is clearly wrong, I'll do something about it so it can be fixed. One well-known large IT consultancy I once sub-contracted for was not pleased that, as an IT person, I fixed the customer's IT problem. That bank and I still have an excellent relationship to this day, because I don't do politics, whilst that consultancy has had its numbers drastically reduced.  

So, what’s the worst possible result of all this politics, bad code and bad data handling?  In short, people do some really, really, dumb things with confidential documents.  

You can’t sugar coat this. It’s stupidity on a massive scale.   This is where I believe the crux of the problem is.  

Smaller banks and smaller organisations hire local people and so the data naturally doesn’t have to travel externally to India.  Big banks hire foreign IT consultancies and so confidential data is routinely exfiltrated and is frequently on the move, and often passes between people with inadequate training and no common sense.  No amount of IT protection against external threats is going to solve a problem that originates internally.  Again, that should be logical common sense, but if you look at the status quo, the evidence says this is frequently not how it’s being treated. 

Like, I’ve seen some really dumb stuff.

Now, this is the part of the article where my natural assumption was that the Internet is just awash with stuff leaking out of Indian consultancies working in Canadian banks, so I was just going to quickly find a document online that obviously shouldn’t be there, and make a “See! This is what I’m talking about” example of it.  

The original plan was that this should only take about 5 minutes, as one of my banks has a "confidential" API platform review underway inside the bank that is public knowledge to everyone outside it.  All I needed to do was point to a document with meeting or phone-call notes about it, and things would get cleared up.

As I said, it should be a quick job.

The reality was that I found the worst example I've seen to date of precisely what I’m talking about.  Someone in Kolkata, India, working for Tata Consultancy Services (TCS), was leaking IT documents from one of my banks.  

And another of the big six Canadian banks...

And two well-known American financial organisations… 

There was also a multi-national Japanese bank. 

And a multi-billion-dollar software company. 

And.... well, you get the idea.  

It was meeting notes. Invoice templates. Network diagrams. Platform architecture plans. 


My original plan of spending a few minutes finding a TCS or Tech Mahindra leak pointing at one of my banks quickly morphed into a massive operation of trying to work out what to do with a multi-national confidentiality breach across multiple banks and financial institutions, originating from one guy in India…   

You know how I found this?  I’ve seen clueless people use a free (and therefore wide open, and not private in any way) online repository like GitHub for confidential stuff before.  But here was a TCS manager in India using a free GitHub repo to manage multiple banks and financial institutions around the world, with all the documents for their various projects on full display to the world.  Even Google had indexed them.  Every migration plan, every estimate, every PowerPoint telling customers how TCS was going to fix or upgrade their systems.  Obviously, in a multi-bank breach like this, the first bank to pick up the torch and run with the issue was going to get a high-level technical overview of what everyone else was doing.
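To give a feel for how exposed this kind of material becomes once a search engine has indexed it, here is a sketch of the sort of query that surfaces it.  The organisation name and keywords below are placeholders, not the parties involved in this breach:

```python
# Sketch: build search-engine queries of the kind that surface leaked
# internal documents once a public repo has been indexed.  The names
# and keywords are placeholders, not the parties in this breach.

def leak_queries(org_names, keywords=("confidential", "migration plan", "architecture")):
    """Pair each organisation name with document keywords that tend to
    appear in leaked internal files, scoped to a public code host."""
    return [
        f'"{org}" "{kw}" site:github.com'
        for org in org_names
        for kw in keywords
    ]

for query in leak_queries(["Example Bank"]):
    print(query)
```

The point is not that this is sophisticated - it’s that it isn’t.  Anyone with a browser can stumble onto material like this, which is why "nobody will find it" is never a defence.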

As I said before, people sometimes do some really, really, dumb things.  

This was a new level of monumental head-scratching activity, as you could literally fork or clone an entire repository containing architecture details and roadmaps for some of the largest financial institutions in North America. 

Here’s a list of recently checked in documents, for example. 

(Click image for full-size)

Now, obviously, I approached my bank first, but they confirmed that they still have a policy of not paying for cybersecurity info, and given that I have a policy of not working for the banks for free, I simply moved on, and the breach went south. 

Literally.

In what is often a stark contrast between Canadian FIs and American FIs, a well-known US financial organisation was more than happy to engage with me. In fact, it was their President & CEO who first made contact, offering me his email address.  

That never happens in Canada. 

Next, a Senior Vice President went over my backstory and the evidence I was presenting, and confirmed both the symptom and the source of the problem.  We had a quick bit of back and forth, and then he dealt directly with TCS on the matter. It was actually a joy to deal with these people - there was no messing around, and things were always clear.

This morning, I checked to see whether TCS had acted as a result, and sure enough, the public GitHub repository has been deleted, along with all the various bank documents.  Looking at the leaker’s LinkedIn page, it appears that TCS has not yet fired that individual for being such a monumental tool.

Conclusion

A common reaction from Canadian banks when I ask (or talk) about a particular security problem is to immediately assume that I must have accessed something internal to their bank, and that I must have breached some external perimeter to get to it.  

This is understandable.  

However, my never-ending mantra is that Canadian banks are naturally just leaky - you don’t need to go into a bank to find security problems. Some banks are leakier than others, and if you know the usual causes of their leaks, you can start guessing where the “puddles” of data or information will appear. Today, I highlighted an age-old problem I've seen for years in Toronto: the consultancies working on the banks' IT systems are a cause of some of the breached information.

In part 1, I highlighted the silly laziness problem where programmers didn't even try to hide URL endpoint strings, allowing script kiddies and amateur hackers to quickly work out the back-end endpoints at many banks.  In part 2, I've now highlighted another big problem that originates inside the banks - training people not to do stupid stuff.  

Training people and having proper protocols and policies in place is key to securing the banks.  I personally believe banks should take a look at the NISPOM. Look at the spirit of what it’s trying to achieve, and think about how that applies to a bank.
  1. Banks also need to stop with the chorus of denial.  There should be meaningful collaboration programs with the public.  Or do something like ABN AMRO did and join HackerOne to accelerate the process of securing things.
  2. Banks need to train people to not do stupid things like uploading confidential information to public GitHub repos, pastebin, and so on.
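On that second point, some of this can be automated before training even enters the picture.  As a minimal sketch (the marker list is purely illustrative - real deployments would use dedicated tooling such as git-secrets), a pre-commit-style check can flag files carrying obvious confidentiality markers before they ever reach a public repo:

```python
# Minimal pre-commit-style check: flag file contents that carry
# obvious confidentiality markers before they reach a public repo.
# The marker list is illustrative; real tooling goes much further.
import re

MARKERS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal use only\b", re.IGNORECASE),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def flag_text(text):
    """Return the regex patterns that match the given file text."""
    return [m.pattern for m in MARKERS if m.search(text)]

# Example: a document stamped "CONFIDENTIAL" would be caught.
print(flag_text("Project plan - CONFIDENTIAL - internal use only"))
```

A hook like this doesn’t replace training, but it turns “don’t post confidential documents in public” from a memo into something the tooling actually enforces.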

Now, you'd think I wouldn't have to point this stuff out in 2017, but clearly I do, because this is still a problem. QED.

So, to recap what needs fixing so far in this series of articles.
Part 1 - Don't be lazy with mobile app security, and check the code being pushed into production for unauthorized additions.
Part 2 - Stop people doing dumb things like posting confidential documents in public by training them with proper rules and protocols.

Next article (part 3) will come in a few weeks.