Just How Easy Is It to Be Hacked?

Earlier this year, David Gilbertson published an excellent article on Hacker Noon with the eye-catching title “I’m harvesting credit card numbers and passwords from your site. Here’s how.” While the article is framed as fiction, the approach it details could certainly have a negative impact on corporations around the world.

A Relatively Simple Approach

In the author’s pseudo-confession, he states “I’ve been stealing usernames, passwords and credit card numbers from your sites for the past few years.”

Gilbertson went on to detail how his code looked at form fields with standardized names (password, card number, CVC, etc.) and relied upon built-in events to execute the following tasks:

  • Collects data from all form fields (document.forms.forEach(…)) on the page.
  • Grabs document.cookie from the session.
  • Transforms the information into a random-looking string: const payload = btoa(JSON.stringify(sensitiveUserData)) (see the sketch below).
  • Sends the data to a domain he manages between the hours of 7 pm and 7 am (local browser time).
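A note on that “random-looking string”: btoa applied to JSON.stringify output is just Base64 encoding, not encryption. The minimal sketch below (the data values are made up for illustration) shows how opaque such a payload looks in a network log, and how trivially it decodes:

```javascript
// Hypothetical data values -- only here to show what the encoded payload looks like.
const sensitiveUserData = { username: 'jane.doe', password: 'hunter2' };

// Base64-encode the JSON: "random-looking" to a casual observer, but not encrypted.
const payload = btoa(JSON.stringify(sensitiveUserData));
console.log(payload); // e.g. "eyJ1c2VybmFtZSI6..." -- opaque at a glance

// Anyone, including a reviewer inspecting traffic, can reverse it just as easily.
console.log(atob(payload)); // {"username":"jane.doe","password":"hunter2"}
```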

How Does His Code Work?

At the heart of the attack is the uber-popular npm. For those who are not aware, npm is a JavaScript package manager and sits at the core of products like grunt, bower, gulp, browserify, and cordova (just to name a few).

“Lucky for me, we live in an age where people install npm packages like they’re popping pain killers.” – David Gilbertson

In David’s (fictional) approach, he decided to leverage npm as his distribution method. He created a library that displays console log messages in color, figuring other libraries would welcome the functionality. After all, seeing colors on the console is pretty cool, right?

Next, he used several accounts to submit pull requests (PRs) to various front-end packages and dependencies. Each PR made a much-needed fix and, along the way, added his logging functionality. Developers going the extra mile are always welcome, especially in open-source projects.

The PRs would be approved, and the fix would be deployed along with his way-cool colorized logging package. The only issue is that the package also included the functionality to perform the tasks noted above.

Once his PRs were approved, he started to see ~120,000 downloads a month – with none of the consuming projects suspecting any issues. As a result, he began receiving torrents of usernames, passwords, and credit-card information, some of it arriving from sites in the Amazon Alexa top 1,000.

Are We Doomed?

While Gilbertson states his endeavor was purely fictional, the simple nature of the attack and its potential for widespread deployment are very concerning. The biggest challenge is getting development teams away from blindly accepting packages that are available via services like npm.

In a perfect world, every package (and all associated dependencies) would be analyzed, reviewed, and scanned at the version level before being implemented. In the real world, this feat is daunting at best and unlikely to happen. As a second layer, the QA effort (again, in a perfect world) would monitor network activity while updates are being validated. In practice, however, attention is usually laser-focused on the item being fixed or enhanced, leaving broader concerns (like network activity) unexamined.
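Even without a full review pipeline, a few lightweight checks can be automated. The sketch below is one possible shape for such a gate, assuming an npm-based build; the commands are standard npm CLI, but the severity threshold and where the step runs are assumptions for each team to adjust, not a prescription:

```bash
# Install exactly what the lockfile specifies; fails if package.json and
# package-lock.json have drifted apart.
npm ci

# Fail the build when known vulnerabilities at or above the chosen severity exist.
npm audit --audit-level=high

# Print the full dependency tree (including transitive packages) so reviewers can
# see what a new or updated package actually pulls in.
npm ls --all
```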

Agile teams using Scrum often allocate a portion of each Sprint (commonly cited as 20%) to technical debt. Perhaps some of this time can be spent gaining an understanding of the packages and dependencies introduced during the prior Sprint – if not the current one.

Testing the Waters

Does testing really pay off in the world of software development?

As software developers, we are constantly hounded about the importance of automated testing. We hear diatribes about test-driven development, evangelists who swear by unit testing, and coders who claim testing has changed how they work. By now, most people in the industry would agree that this type of testing is important. What I find surprising, despite the apparent benefits, is that most companies don’t do it; in many cases, it is the very people who swear by it who aren’t doing it.

So why aren’t more companies utilizing automated testing in their software development? For starters, it takes more time. It is, of course, easier to write code without a companion unit test for every method created. Writing tests often means writing as much code as, or more than, the actual solution. With deadlines looming, it is hard to justify the added effort, and the tendency is to just get the application working.

Creating unit tests is hard!

Many times, it is challenge enough just to come up with the logic necessary to build an application. Couple that with factoring your code so that it can be easily tested, and the coding effort becomes twice as hard. One must consider potentially thorny concepts such as interfaces, dependency injection, and mocking. Questions arise, such as what to test, how much coverage is preferred, and whether to distinguish between pure unit tests and integration tests.
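To make those concepts concrete, here is a minimal sketch of what “factoring for testability” can look like, using Node’s built-in node:test runner. The names (calculateOrderTotal, priceCatalog) are hypothetical; the point is that injecting the dependency lets the test supply a hand-rolled fake instead of a real catalog:

```javascript
// Production code: the price lookup is injected, so a test can supply a fake.
function calculateOrderTotal(items, priceCatalog) {
  return items.reduce((total, item) => {
    const unitPrice = priceCatalog.getPrice(item.sku); // injected dependency
    return total + unitPrice * item.quantity;
  }, 0);
}

// Test code using Node's built-in test runner (Node 18+).
const test = require('node:test');
const assert = require('node:assert');

test('calculateOrderTotal multiplies unit price by quantity', () => {
  // Hand-rolled fake standing in for a database- or API-backed catalog.
  const fakeCatalog = { getPrice: (sku) => (sku === 'ABC-1' ? 10 : 0) };

  const total = calculateOrderTotal([{ sku: 'ABC-1', quantity: 3 }], fakeCatalog);

  assert.strictEqual(total, 30);
});
```

Because the catalog is passed in rather than imported directly, the production version can be backed by a database or an API while the test stays fast and deterministic.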

Testing can also be distracting; it’s easy to lose sight of the true goal when you are worried about constructing the perfect test framework. No amount of testing, however, can overcome a solution that fails to meet its original requirements. And on the topic of requirements: with the popularity of agile methodologies, functional requirements are often fluid. This flexibility in scope potentially sets us up not only for rewriting code, but also for rewriting tests to match. Our once-perfect suite of test cases can easily become stale.

Despite all the added complexity, why is automated testing now more important than ever? There are two significant reasons it has become so crucial in the software development lifecycle. First, I point to the ‘rising cost of defects’ paradigm, which illustrates the exponentially increasing cost of finding a bug later in the development cycle. A bug discovered through automated testing is far less costly than the same bug discovered during acceptance testing. Later in the process, the end user must capture, document, and submit the bug; the development team must then prioritize, assign, re-analyze, and refactor. All of this is time and money that could have been saved had the bug been detected earlier through automated testing.

The second factor behind the increased importance of automated testing is the rise of the agile methodology. Nearly everyone, it seems, is using some form of agile development practice these days, which means deployments happen far more frequently than they did in the old days of ‘waterfall.’ With a repetitive release cycle, it becomes paramount that testing be automated. Executing the same tests by hand, release after release, is not only tedious but also invites the possibility of skipping certain tests on future runs. Automated testing not only verifies that new code is working but also keeps us covered in terms of regression.
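Regression coverage is easiest to see with an example. In this hypothetical sketch (the formatDueDate function and the bug are invented for illustration), a test is added when a defect is fixed and then runs with every subsequent build, so the same bug cannot quietly return in a later release:

```javascript
const test = require('node:test');
const assert = require('node:assert');

// In a real project this function would be imported from the application code.
// Suppose an earlier release dropped the leading zero for single-digit months.
function formatDueDate(date) {
  const month = String(date.getMonth() + 1).padStart(2, '0');
  const day = String(date.getDate()).padStart(2, '0');
  return `${date.getFullYear()}-${month}-${day}`;
}

// Regression test: locks in the fix so every future run re-verifies it.
test('regression: due dates keep leading zeros for single-digit months', () => {
  assert.strictEqual(formatDueDate(new Date(2018, 2, 5)), '2018-03-05');
});
```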

If you or your company are in the business of developing software, ‘test the waters’ and implement automated testing in your development operations. Despite the initial investment in skills, tooling, and time, the payoff will become apparent as your product matures. Speaking from experience: done right, a solid automated-testing framework will pay dividends before you reach the end of your first release cycle.

If you would like more information about automated testing, or how we could help you implement testing in your development environment, contact us at CleanSlate.

Playing Find and Seek on a New Project

Have you ever found yourself on a project where you struggle to get answers to your questions? I encountered this situation while part of a team converting an application built in a proprietary language into a solution based on standardized languages and frameworks.

On this project, the stories contained references to program code in the current version of the application. This caused our team to play a game I have referred to as “find and seek” in order to derive the underlying business rules and functionality.

The Building Contractor Example

Reviewing a significant amount of proprietary code eventually leads to uncertainty about the results. While doing my best to analyze and understand program logic written in a language I am far from expert in, there were times when I needed to reach out to the subject matter expert (SME) on the project. When I asked my task-related questions, most of the answers pointed me back to the current code.

While I certainly understand that everyone, including the SME, is very busy with tasks related to the project, the following example came to mind:

Assume that my job is to frame houses for a living. I have a strong understanding of the framing process and experience with the tasks assigned to me to deliver a fully framed home. On one project, I reached a point where I needed an answer from the architect. When I asked for a quick answer to my question, the architect instead responded with the following:

“Get into your vehicle and drive about 20 miles to a house I completed prior to this project. Ask to enter into the home and find your way into the attic. Once in the attic, if you review the work I completed in the northwest side of the home, you will understand how I want this task completed.”

In essence, the architect asked me to make a 40-mile round trip to (hopefully) gain access to a home built to the same requirements, find my way into its attic, and work out an answer to a question the architect could have provided in a fraction of the time.

The Cost of Find and Seek

While the building contractor example may sound absurd, asking a team member to spend extra time hunting for an answer is not much different. When these situations keep occurring, it is easy to see how a project can stall or go over budget as team members continually re-derive requirements and functionality from the version of the application being replaced.

In addition to the lost time, there is the time required to regain productivity after an interruption to planned work. The following graphic appears in Arshad Hossain’s blog entry “A study on unplanned interruptions in software development”:

With a goal of 70% productivity, the period between data points I3 and I4 demonstrates the loss of productivity due to an interruption. Based on the graphic alone, there appears to be a 45-minute window in which productivity bottoms out at 10%. In this example, the author references helping another individual during this time, but the segment could just as easily represent the situation in my example.

Productivity does not drop to zero, since some progress is made during the find and seek effort. Eventually, productivity climbs back to 80% before a break is taken.

By comparison, the I1 data point could represent the impact on productivity if the SME were to take a few minutes to provide an answer. Here, there is a small drop during the conversation, but productivity increases rapidly as a result of the quick assistance.

Impact on the SME

Looking at the graphic from Arshad’s article could raise the question: do the I3 and I4 data points represent the productivity lost by the SME in fielding questions about the application being converted?

Considering the role of the SME, I don’t believe this is a valid concern. While the SME’s productivity will be impacted by answering the question, providing expert information about the current application is part of the SME’s job. In fact, Arshad Hossain concludes, “On average, about 40% of working time is lost because of interruption, but not all of it should be counted as waste as some of it is unavoidable and some of it can actually increase other people’s productivity.”

Conclusion

I truly understand the task load on team members during application conversion projects. During these projects, the product owner, development lead, and other key roles are often overloaded with tasks because they’re focused on delivering a quality product at an accelerated pace. However, it is important to keep in mind the impact that asking team members to play “find and seek” has not only on their productivity, but also on the cost of the project.

If your business is looking for help with software asset management, building an app, cloud consulting, or any other IT issue, our team at CleanSlate can help. Contact us today.