Wednesday, August 19, 2015

Telco security in Canada

This morning, I was checking up on some old security holes to see whether any had been patched.  I looked at Rogers and I looked at Bell Canada.  Usually, not much changes, but this morning I noted that both had tackled some very low-hanging security fruit.

It may be coincidence that they both did this around the same time.  It may also be a case of one noticing that the other had "upped their game" and following suit.  Either way, both of these flaws were schoolboy errors that allowed someone to start poking around in areas where the general public shouldn't be.

In the case of Rogers, you could access network infrastructure information that wasn't for public consumption.  In the case of Bell Canada, you could see what their PR machine was going to announce before they announced it.  After a little more poking around, I discovered Rogers still had a hole, so I've reached out for them to contact me and I'll re-explain it to them.

Now, here's the kicker:  Both holes are visible in Google's caching mechanism, meaning you don't even need to touch their networks.  You just wait for Google's bots to crawl through the holes and report back what they found, then go and read it all on Google, where it's spat out for all and sundry to see (if you know what to look for).

As you can see, this is schoolboy-level stuff.  Luckily, in neither case was customer data being spewed out - but each flaw does point to a bigger problem with security...

In Rogers' case, the problem is that unless something involves billing or customer-facing sales, websites and portals become neglected and forgotten - at which point security moves on, and certain Rogers sites become relics of the past that are prime targets for problems like this.

In Bell Canada's case, their security problem is that they parade around saying how safe they are, pointing at firewalls, technological expenditures and monitoring equipment, then undermine all of this with policy.  As I've said before, Bell can add as many guns as it thinks it needs to the decks of its battleship, but it won't make the blindest bit of difference to the existing hole below the waterline.

Further, their myopic legal stance, paraphrased, is that if you report something to them that you shouldn't know, they'll respond as if you perpetrated the crime.  This is analogous to standing on one side of the street, noticing a burglar breaking into a house on the other side and running off with the TV, then getting arrested when you tell people you know the TV is missing.

This is what led to the rather silly situation with Bell Canada where nobody tells them the specifics of their problems, which reinforces Bell's notion that "absence of evidence is evidence of absence" where security holes are concerned.  Meanwhile, crooks can run with compromised customer accounts that can't be reported.

So, in conclusion, whilst it's nice to see small incremental security improvements happening at Canada's two major telcos, there are still bigger issues at play.

Tuesday, July 21, 2015

The Hymn Of Acxiom

Today, I was looking again at another runaway data leak, which I am pretty sure originates within Bell Canada.  To know that for certain, I have been tracing the source of the data through many different links in the chain - via the usual methods of poking privacy departments, legal teams, etc. - for the better part of the last month.

Today, that chain exited Canada as the last privacy team handed over the name of the next team I would be talking to.  That turned out to be Acxiom, in North Little Rock, Arkansas.

If you've not heard of this company, you should go and research them.  This is a large data company whose practices get questioned a lot by privacy advocates.  So much so, there is even a song about them called "The Hymn of Acxiom"...

Check it out.

Tuesday, June 30, 2015

IVR Security Hole Redux with CIBC

In January, I was dealing with the bank CIBC over an issue with their IVR system.

To recap, there was a problem with their automated system, where it would call and ask whoever picked up my phone to "press 1" if they were me.  If someone pressed 1, the bank would then fail to verify whether it was actually me - or the cleaner, the child-care assistant, or a robber who had just stolen my phone - and would rattle off what I consider to be personal information.
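For contrast, a safer flow would verify the callee before disclosing anything.  Here's a minimal sketch in Python - the prompts and the PIN challenge are entirely hypothetical, not CIBC's actual system:

```python
# Hypothetical sketch of a safer IVR flow: challenge the callee for a
# secret *before* disclosing anything, instead of trusting "press 1".
def ivr_call(pressed_one, entered_pin, expected_pin):
    """Return the message the IVR should play next."""
    if not pressed_one:
        return "Goodbye."                       # callee declined
    if entered_pin != expected_pin:
        return "Verification failed. Goodbye."  # disclose nothing
    return "Identity verified. Here is your account information..."
```

Anyone who merely presses 1 - the cleaner, the robber - gets nothing without the PIN.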

Obviously, I wasn't happy about that security hole, which I had first reported in early 2014.

Eventually, a dialogue opened up with the bank in January 2015 (after I complained that the hole had been there for nine months since I first reported it), and the solution settled upon was that rather than fix the issue for everyone, the bank would unsubscribe me from the system.  This meant I was at least protected, even if nobody else was.

Six months have now passed and CIBC has reversed this security fix and the system called me again today.


Saturday, June 13, 2015

Programmers and Interruptions

As a programmer, the single biggest problem I face is the interruption.  Whilst I can joke and empathise with other programmers about this, it's something that non-programmers don't understand. There's a well-worn cartoon in programming circles that I will share here, just in case you haven't seen it.

The point of the cartoon isn't lost on anyone, but what is lost on non-programmers is the time span that this may have taken place over.  For an average programmer on an average job, you're probably looking at 10-20 minutes.  For a programmer working on a really complicated system, this can be several hours.  Additionally, the mental workload involved can be draining enough that if this happens, say, three times in a day, the entire day can be written off. 

But what's going on here?

The crux of the problem starts with the differences between knowledge workers and non-knowledge workers.  Most people, regardless of status in the workplace, work on a schedule where you focus on something, get interrupted, then switch focus until interrupted again.  It's all about prioritising what is most important and getting those things off your plate as quickly as possible.  You hear managers complain that they never get anything done because their day is full of interruptions, but really, that is their schedule - it's just a series of interruptions, and the only thing they have to worry about is which interruption is coming next.

This means that when a manager/house-buddy/spouse walks up to a programmer and asks "a quick question", they assume the amount of time expended by the person being interrupted is the same as the amount of time spent by the interrupter.  However, this is not the case...

First, let's introduce the programmer's mental stack.  It's analogous to this receipt spike:

With a receipt spike, the first item on the spike is the last one to come off.  In computing terms, this is known as a "stack", but it's the same thing.  For a programmer, the first item on the stack (so, at the bottom) is what we are trying to accomplish (fixing a bug, writing a feature, etc).  Then we push the next pieces of the puzzle onto the stack.
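The same last-in, first-out behaviour takes only a few lines to demonstrate in code (Python here, purely as an illustration):

```python
# A "receipt spike" in code: the last item pushed on is the first popped off.
spike = []
spike.append("fix the bug")        # the bottom of the spike: the original goal
spike.append("trace scenario A")   # a sub-task pushed on top
spike.append("check endian-ness")  # and another sub-task on top of that

# Working back down the spike: the most recent sub-task comes off first,
# and only at the very end do we return to the original goal.
assert spike.pop() == "check endian-ness"
assert spike.pop() == "trace scenario A"
assert spike.pop() == "fix the bug"
```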

A real-life example of what I am talking about...  This is one I had today (remember, it's in reverse order, so start at the bottom), and it represents about 45 minutes:
  • Trace in my head the flow of the header packets that precede the payload packets to make sure this is the best way of doing things.
  • Double-check logic for endian-ness incompatibility.
  • Packets in wrong order still, so add stuff for that.
  • Rejig classes X1, Y1 and add new functionality to classes X2 and Y2 all in my head.
  • Build packet model in my head.
  • Refresh my head on how the packets are structured. 
  • Looks like a packet issue - maybe if I remodel how I'm dealing with the packets... 
  • Let's trace the code in scenario B and compare to what we just learned in scenario A.
  • Let's trace the code in scenario A to see precisely what's going on.
  • It fails in scenario B - must fix scenario B. 
  • It works in scenario A 
  • The device does not behave how I expected it to.
  • There's a bug in my iOS Bluetooth code - must fix it.

Obviously, this is all done in my head: I'm tracking what changes I haven't yet made, as well as concentrating on the actual task I'm working on at this precise moment, whilst balancing the stack in my head so that when I finish each task, I can go back to the task I was working on before.  When doing this, I'm not typing a single line of code.  I'm staring at a picture on the wall - probably not even taking in what my eyes are seeing - and my ears hear stuff, but I'm not actually listening.  I've likely got headphones on, just to drown out things like the talking happening around me, so I don't lose focus.

This is the moment when people say "Oh, as you're not busy right now, can I just ask a quick question" and then wonder why I'm annoyed.  As the person is still talking to me, my head is scrambling to work out how I'm going to recover from this as I still have about 3/4 of the stack in my head.  As I'm being asked if this was a bad time, my anxiety level is skyrocketing as I try to cling on to what I learned in tracing scenario A in that stack above... Was it header packet 0x02 that had the payload size 0x0400?  Must remember 400... no, it's not 400 as the bytes need to be swapped so 0400 is really 0004...   At this point, I must answer the question I'm being asked - so blurt it out - 4... must remember 4... what packet was that on again? 2? Was that in scenario A? Or B?

Now panic sets in...  If I can't remember whether the payload size 400... no... 4... is the right one or the wrong one, I need to go back and look at that again, which means we're now into another 30 minutes of building up the model in my head again.  Having answered the question, I'm staring at my screen and quickly trying to refresh my head.  This is when you hear the dreaded words "Before you get too engrossed again, can you just...."


Now, I can get back to work in a minute or two if I'm doing something trivial.  However, if I'm working on a really complicated issue (such as the above Bluetooth stack), then it's not uncommon for me to take 30 minutes or longer to get things straight from a cold start.  Really big issues can take 90 minutes to get my mental cache loaded up with the model of what I'm doing and how I'm doing it, plus all the states memorised as to what it's doing right now. 

And I'm not alone.... There's a study (see here) of over 10,000 programming sessions that measured the edit lag (how long it takes between an interruption and starting to type again), and it looks like this...

As you can see it takes a while to get back into things.

There is a retort from some managers (or other house guests, if working from home) that we need to "learn to handle interruptions better".  This just throws fuel onto the fire: a knowledge worker uses concentration and focus to perform a task, and interruptions break that focus.  The comment is analogous to tripping up a footballer and telling them they need to handle unplanned obstacles better.

One last thing I'll leave you with:  Never interrupt a programmer chatting to a rubber duck to ask them what they're doing.  

Thursday, June 11, 2015

A simple RAT C2 experiment...

This morning I was reading a security article in this tweet...

As a quick test for my own sanity, I logged into one of my banks, located somewhere I could put some freeform text between two delimiters, and planted a temporary dummy RAT payload, just to see if the bank filters out the really obvious stuff (like a plain-text IP address, for instance).

It turns out this bank doesn't.  

Further, I realised this bank also gives me the ability to select from a list of banks, so you can further split things up into command (between the hyphens) and payload (between the "DELIM" text).
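To illustrate the scheme I'm describing (the delimiters and message format here are my own illustration, not the bank's actual fields), extracting the two halves on the receiving side would be trivial:

```python
import re

# Hypothetical C2 message hidden in a freeform banking field:
# the command sits between hyphens, the payload between "DELIM" markers.
def parse_beacon(text):
    cmd = re.search(r"-(.*?)-", text)
    payload = re.search(r"DELIM(.*?)DELIM", text)
    return (cmd.group(1) if cmd else None,
            payload.group(1) if payload else None)
```

A bot polling the account would simply read the field back and act on whatever it parses out.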

Whilst an encoded command would be more difficult to detect (see above article), you'd think they'd regex out IP addresses or anything beginning with "http"...  This is a bank, after all, and freeform text is bad design in this security context.
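As a sketch of how little it would take, here's the kind of input filter I'd expect - a hedged illustration in Python, not the bank's actual validation layer:

```python
import re

# Reject freeform text that smuggles an obvious beacon: a plain-text
# IP address, or anything that looks like a URL.
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
URL_RE = re.compile(r"https?://", re.IGNORECASE)

def looks_like_c2(text):
    """Crude check for the 'really obvious stuff' mentioned above."""
    return bool(IP_RE.search(text) or URL_RE.search(text))
```

This wouldn't catch an encoded command, of course - but it's the bare minimum you'd expect in front of a bank's freeform fields.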

To recap what happened here: I just discovered you can leverage millions of dollars of infrastructure, sitting behind state-of-the-art security technology, to plant freeform text that can morph a bank into a C2 server.

Now just let that sink in for a moment.  Yikes!

Tuesday, May 26, 2015

Solving Visual Studio DEP0700 Error Under Parallels

Today I ran into a problem where I was using Windows 8.1 under Parallels for Mac and created a brand new Visual Studio project.  Just to make sure everything worked fine, I hit "Run" and ran head-first into error DEP0700.

This is a frustrating error, as it's (by design) trying to stop you from running a project whose output is on a network share when in "Local Machine" mode.  Of course, it has no idea that this network share is actually on your local machine in the first place, as a result of running in Parallels.

The simple solution is therefore to tell Visual Studio to run the solution on a remote machine, then set the name of the remote machine to "localhost". 

Easy fix to an annoying problem. 

Sunday, May 24, 2015

The Price Of Rushed Software For Apple Watch...

See bottom of article for June 25, 2015 Update

The Apple Watch has caused a flurry of apps to be quickly updated to support it - sometimes successfully, and other times, it would appear, as part of a "look at us too" campaign by companies shoe-horning their app onto the device.  With the rush to get software out, I am seeing some mistakes slip through the QA systems of certain apps.

Recently, I was contacted by one of my banks about a customer service issue, and as I sometimes do with that bank when they reach out to me, I pulled a single item off my pile of security issues and threw them a bonus bone to chew on.  (If they go to the trouble of reaching out to me, the least I can do is give them something to make themselves safer).

The issue I raised with the bank was that they had accidentally left "Backdoor" (their term, not mine) URLs for the iPad and iPhone in the localization strings under the WatchKit extension of their app.

Personally, I don't run this bank's app, but given my history in Canadian mobile banking (I've still got code in the Canadian retail banking pipeline that won't see the light of day until 2016 or 2017, in everything from location services to photo cheque deposit anti-fraud), I do keep an eye on this bank's mobile software to remain aware of the industry.

Just to clarify for the legally minded, this isn't hacking/reverse-engineering/etc... Hacking means breaking in or coercing software into doing something unintended, and reverse-engineering means taking software and getting back to the source to work out how something is done.  In this case, you don't even need to open or run the app, or perform any reverse-engineering...  As this information is left in the open, you just download the app from the Apple App Store, open it up in Finder on a Mac, and start reading what was left unprotected.  This is no more "hacking or reverse-engineering" than reading the pages of a book is "reverse-engineering that book".

The bank in question here can easily say "this posed no threat to our operations or customers" and they'd be totally correct... in the same way that leaving a million dollars on the sidewalk or having the entire CxO team dance naked down Bay Street would have little impact on customer privacy or be detrimental to operations - however it's still not something anyone should do willingly.  

What it does point to is a botched rollout where technical mistakes were pushed through to meet management deadlines.  Here are three reasons that led me to this conclusion.

First, URLs in a bank app shouldn't be public.  As a bank, they're just asking for trouble when script-kiddies find this kind of stuff.   The URLs should be inside the app itself, where no class-dumping tools can get at the strings (because they'd be encrypted - but whoever approved this from a security standpoint didn't understand that point when it was OK'd).

Second, from an architectural standpoint, this makes no sense, as the iPad and iPhone URLs don't belong in the WatchKit extension (the iPhone/iPad does the internet stuff, not the Watch). At first glance, this looks like a schoolboy error in cut-and-paste coding.

Third, from a technical standpoint, this is sheer lunacy: URLs don't belong in the localization strings file, because you plug the language parameters in at runtime rather than hardcoding the URLs into the UI localization.

What we can do is take the information the bank provided and show what they should have done if they were doing this properly.  They should have created a function like this for the "Registration" URL, and buried it in the app itself.

-(NSString*) getLocalisedRegistrationURL {
    NSString *format = @"/olbtxn/registration/Registration.cibc?register=initRegistration&visitorId=native&locale=%@";
    NSString *locale = [[NSLocale currentLocale] localeIdentifier];
    // Substitute the runtime locale into the %@ placeholder.
    return [NSString stringWithFormat:format, locale];
}

...then repeat this process for any other URLs that must be localised.  That's what should have been done.

Now, having said all the above, we can also take a step back from the technical lunacy and architectural nonsense, and ask whether this was actually done deliberately.

In my mind, this is the more likely scenario... 

In the above links there are instances of what appears to be a possible smoking gun, in the form of "visitorId=native".  This gives the impression that, under the hood, the app is doing a popular trick: substituting the token flag word "native" with the value returned by identifierForVendor, which allows the bank to identify your device and track you.  That means the URL is very likely being processed further in the app to do substitutions on the URL string.

This obviously raises the question of why they didn't substitute the locale when they had every opportunity to do so.

I think the answer is that this is a somewhat ham-fisted attempt at casting the possible ISO locale values to force them to either "en_CA" or "fr_CA", because the backend servers have no idea what to do with en_US/en_UK, fr_FR, etc.

As you can guess, this problem is also easily solved... if the French language is not the user's primary language, cast the result to Canadian English:

-(NSString*) getEnglishOrFrenchLanguage {
    // Anything other than French falls back to Canadian English.
    if([[[NSLocale preferredLanguages] objectAtIndex:0] isEqualToString:@"fr"]) {
        return @"fr_CA";
    } else {
        return @"en_CA";
    }
}
At this point, we've drawn boxes around two mistakes, been given the preferred tracking mechanism, and uncovered the hallmarks of a rushed rollout - one that at first glance looks like a schoolboy error, and later looks like back-end issues being ham-fistedly patched over to make deadlines...

...all for the sake of seven lines of code.

Update - Jun 25, 2015

So the bank said that they would update the app to remove the backdoor URLs.  This week, I noticed that they had updated the app, so I took a look to see if they had removed them as promised.

This is what I found:

Yes, the word "Backdoor" has been replaced with "BD"...  

*face palm*