I recently spent time thinking about a software development project I’m managing that had me a bit puzzled. This is the first time in my long project management career that there has been absolutely no pushback from anyone in the business regarding a project. No one is questioning the reasons for doing it, the scope, or the approach. The organization is clearly aligned and committed to getting the project done.
I’ve managed software projects for multiple Fortune 500 companies, with eight-figure budgets and project teams of more than 100 members. In every one of those projects there has invariably been an individual or group that was either passively or actively opposed to the effort. That is not the case in this project, so what is different?
I reviewed the “Top ten reasons why projects fail” lists to see if there were any clues to why this project is so different. I hadn’t looked at any of these lists for some time, and found it interesting how many variations there are. The top two on my personal list are having a clear, well-defined business case and having executive sponsorship for the project. My most difficult projects have been those where the business case was ill-defined or executive sponsorship was missing. Most troublesome have been projects where the executive sponsor changed, or where C-level executives who are major stakeholders changed. In these situations the business case is often questioned by the new leadership, and executive support can disappear. When executive support wavers, passive resistance can turn active quickly.
Below is just one of the top ten lists I came across in my review; numbers 1 and 9 on this list match my personal top two.
The current project, which breaks the mold in that no one opposes it, has a business case and executive support, much like the majority of the other projects I’ve managed. Yet even with these in place, all of those other projects still met some level of resistance.
So why is everyone across this business on board and actively supportive?
The conclusion I’ve reached is that it must be the “pain factor” that has everyone aligned. The system that is currently in place is very old and lacking in functionality. Business processes are very manual, error prone, and labor intensive. The pain in using the current system is widespread across the organization and constant. The entire organization is supporting the project that will ultimately make the pain go away.
My personal list of factors that affect the likelihood of project success or failure now has a third major consideration: the pain factor. The higher the pain factor, and the more broadly it is felt across the organization, the more likely the organization is to rally behind the project that will make it all better. Conversely, a project that will not relieve a painful situation may not have as high a likelihood of success, even with a strong business case. When the pain factor is low, all the factors on the top ten lists become more important.
Successful projects don’t just happen. They require attention to all the factors that can cause projects to fail or fall short of delivering a quality solution to the business. Archetype SC can work with you to identify your top ten risks and the strengths of your organization that will mitigate those risks and ensure success in your project delivery.
WHY PROJECTS FAIL – TOP 10 REASONS
Excellent Project Management post by Tom Tsongas. January 13, 2014.
- Lack of a Project Charter
  - The Project Charter is essentially the ‘what’ portion of the project: it dictates exactly what is being built, created, or enacted and explains, in high-level terms, the justification and initial scope for the project.
- Lack of User Involvement
- Poorly Defined Requirements (Poor Scope Definition)
- Scope Creep
- Inadequate (or non-existent) Testing
- Lack of Resources
- Use of New or Unfamiliar Tools
- Political Infighting
  - Infighting exists in companies as well as governments. Functional managers and executives with their own vested interests in specific aspects of the business can often come to blows over new or existing projects.
- Poor Project Management
Everybody wants to be secure, but not everyone is willing to make sacrifices to achieve it. The security/functionality/ease-of-use triangle is a simple but effective representation of the challenges faced when implementing security of any kind. Applied to IT security, it acts as a sliding scale: moving closer to one point of the triangle pulls you away from the other two. IT security is not that different from other types of security, such as physical, financial, or national security, but it doesn’t get the respect it deserves. The fact remains that as we make something more secure, it generally becomes more difficult or less desirable to use.
If you look at the triangle and place yourself closer to the security point, then you probably have a bulletproof email password, full-disk encryption on your workstations, and two-factor authentication for all of your web apps, and you can spot a phishing attempt from a mile away. Such a person is in the minority of the general population. The problem is that most people do not practice good habits or common sense where IT security is concerned.
Verizon’s 2015 DBIR (Data Breach Investigations Report) found that 50% of phishing emails are opened and that 10% result in the embedded link being executed. These phishing attacks require the user to take action for the malicious activity to occur, yet they succeed 10% of the time. This is just one of many statistics showing that we have a long way to go in IT security. You can put up the tallest wall around something you want to protect, but if someone with a key lets the criminal in the front door, the wall isn’t going to stop them.
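To put those rates in concrete terms, here is a quick back-of-the-envelope sketch. The campaign size is hypothetical; the rates are the ones cited above.

```python
# Hypothetical phishing campaign, using the open and click rates cited above.
emails_sent = 1000
open_rate = 0.50    # half of phishing emails are opened
click_rate = 0.10   # one in ten results in the embedded link being executed

opened = round(emails_sent * open_rate)
clicked = round(emails_sent * click_rate)
print(f"opened: {opened}, clicked: {clicked}")  # opened: 500, clicked: 100
```

Even a modest campaign yields a hundred footholds; the attacker only needs one.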
Biased decisions are difficult to avoid when selecting a new employee from a large group of qualified candidates. Hiring bias begins with the review of an applicant’s resume: for example, Ivy League schools are known to be preferred over non-Ivy colleges and universities. Furthermore, a comparative baseline, typically one’s own achievements, is used when interviewing candidates. Laura Mather became aware of hiring bias several years ago after joining a risk management team. She was highly qualified, but once offered the position, she was told that the company had been hesitant to hire her because she did not attend an Ivy League school. Mather later founded Unitive, whose software helps companies overcome hiring bias in recruiting and interviewing applicants. She hopes the program will bring diversity to companies by making the hiring process neutral to gender and race. I agree with Mather’s notion that diversity is critical to provoking new thoughts and ideas. We will continue to eliminate hiring bias at Archetype SC and apply Mather’s strategies when considering new candidates to join our team.
We continue to read headlines about “big data” and the power it can have in almost any market, but it remains an amorphous term without definition or bounds. What one person deems big data, another may simply call a large data set. Definitions are important, and in a previous post I proposed two working definitions. I used the first as the basis for last month’s article on questions to ask before or while implementing a big data solution. As a follow-up, here are a few questions to ask when taking the first steps toward using your data.
Many companies and organizations are sitting on data that is not being used, is being underused, or is being misused. You can ameliorate the effects of these conditions, or at least understand how to, by asking a few questions. It is entirely possible that you will not have answers to the questions, or will not know how to use the answers; in that case, you would be well served to partner with an expert to help you get started. These questions are similar to those I recommend asking before a big data implementation, but they have their own spin.
What do you hope to accomplish? This may seem like a simple starting point, but many people I have spoken with don’t have a business case, or even a reason in mind, to look at their data. Your goal may be general (“increase sales”) or specific (“increase sales by 50% in the Southeast among 18-to-25-year-old males”). Often the starting point will be more general than specific, and it will be refined over time.
What data do you currently have or collect, and what could you collect with little to no additional effort? Take a good, hard look at what you have now, and realize that many of the programs you use every day are collecting data in the background. An excellent analyst can help you find sources you may be overlooking and can help transform some non-traditional data sources into usable data.
What outputs do you need? It is important to understand what you, or the decision makers in your organization, need to see to gain value from the data. You may be able to find insights and answer your research questions, but if the results are not presented in a format that is easy for people to understand, they will not be used and will serve no function.
Above all, make sure that you have a plan. Trying to implement any analytical solution without one will result in failure. If you need help, it is important to bring in a consultant or analyst early. Spending money at the start of implementation may end up saving you money in the end.
I went to a private, Christian university where vices such as swearing were certainly discouraged, if not outright banned. After registering for my first econometrics[1] class and quickly skimming my reading list, I was surprised (and secretly thrilled) to see Joel Best’s Damned Lies and Statistics as required reading. I fell in love: numbers, modeling, and the power of statistics to paint a picture fascinated me. More recently, I have come to love the data behind decisions. Bad data, or more precisely bad interpretations of data, lead to bad decisions. Good data, or more precisely good interpretations of data, lead to good decisions. This post focuses on bad decisions; next month I will highlight good ones.
New Coke
In April, Coke lovers like me celebrated (or at least commemorated) the 30th anniversary of the infamous “New Coke” decision, with Bill Cosby, among others, pitching it.[2] The Real Coke, The Real Story, by Thomas Oliver, gives a great history of the brand and the decision.[3] Coke was losing ground, he accurately argues. The now-famous Pepsi Challenge, in which consumers were asked to pick their preferred beverage in a blind taste test, was decidedly in Pepsi’s favor. Even in Houston, a city where Coke held a 25% market share advantage, taste test data indicated only a very slim preference for Coke (p. 34). Coke needed a change, or so the company assumed.
To ensure the success of the new formula, extensive market testing was done. Robert Schindler notes that Coke’s research cost more than $4 million “and included interviews with almost 200,000 consumers.”[4] The result was clear: the new formula was preferred “61% to 39%.” The decision seemed clear as well: discontinue old Coke in favor of new. So why did it ultimately flop, being labeled the “marketing blunder of the century”?[5] The data was used incorrectly.
In any use of data, it is imperative to identify extraneous or lurking variables[6] and to attempt to control for them. In Coke’s case, the company failed to quantify “the bond consumers felt with their Coca-Cola,”[7] as it acknowledges in a retrospective on its website. Emotions, preferences, and tastes are notoriously difficult to quantify and tabulate, but attempting to control for them is important.
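To make the lurking-variable point concrete, here is a small sketch with invented numbers (not Coke’s actual research data). It shows how an unmeasured factor such as brand attachment can flip a headline preference once results are weighted by who actually buys the product.

```python
# Illustrative only: all figures below are invented, not Coke's data.
# An unmeasured variable (brand attachment) can reverse a headline result.
prefer_new = {"attached": 0.30, "casual": 0.90}  # share preferring the new formula

sample_mix = {"attached": 0.5, "casual": 0.5}    # who took the sip test
volume_mix = {"attached": 0.8, "casual": 0.2}    # who buys most of the product

headline = sum(sample_mix[g] * prefer_new[g] for g in prefer_new)
weighted = sum(volume_mix[g] * prefer_new[g] for g in prefer_new)

print(f"headline sip-test preference: {headline:.0%}")   # 60%
print(f"purchase-weighted preference: {weighted:.0%}")   # 42%
```

With these made-up mixes, a survey majority for the new formula coexists with a purchase-weighted minority, which is exactly the kind of gap a lurking variable can hide.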
Avoiding a New Coke-Sized Mistake
Bad interpretations and skewed data are two simple reasons Twain classically grouped statistics with the two other types of lies: lies and damned lies. As you implement analytics and data-driven decisions, be wary of the implications of bad data practices and misinterpretations.
First, make sure your data is clean. Are you seeing exactly the result you want? Data is easy to aggregate and cherry-pick to give rosy results. Question anything that seems just a bit too good.
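As a rough illustration, here is a minimal sketch of the kind of basic cleanliness checks worth running before trusting any aggregate; the records and fields are hypothetical.

```python
# Hypothetical sales records with three common data-quality problems planted.
records = [
    {"region": "SE", "sales": 120.0},
    {"region": "NE", "sales": None},    # missing value
    {"region": "NE", "sales": -40.0},   # impossible negative figure
    {"region": "SE", "sales": 120.0},   # exact duplicate: double-counted?
]

missing = [r for r in records if r["sales"] is None]
impossible = [r for r in records if r["sales"] is not None and r["sales"] < 0]

seen, duplicates = set(), []
for r in records:
    key = (r["region"], r["sales"])
    if key in seen:
        duplicates.append(r)
    seen.add(key)

print(len(missing), len(impossible), len(duplicates))  # 1 1 1
```

None of these checks is sophisticated, but each catches a problem that would silently skew a sum or an average.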
Second, ask the simple question: “What else may have an impact on this result?” A colleague of mine taught me the rule of five whys: when a decision is made, especially one that may not make much sense, ask “why” five times. If there are five good answers, it is probably a good decision; if not, that will become obvious. I would suggest a five “what elses” rule: keep searching for the “what else” until you have built a really good model. Generally, the greater the importance of the outcome, the more time should be spent creating and refining the model. Your final algorithm is likely to be beautifully simple, but the process of getting there likely was not.
Third, don’t be afraid to ask for help. It’s not always popular or easy, but even experienced analysts and data scientists get stuck or have their own inherent biases. It can be invaluable to ask a friend or colleague not working on the project to take a look at your numbers and conclusions. There can be great power in a “sanity check,” even if it comes from someone other than a traditional expert!
Fourth, admit when you get it wrong. The conclusion of the New Coke story, with the reintroduction of Coca-Cola Classic and the phase-out of New Coke, is a fitting end to these tips. Recognizing a mistake can be painful, but when data leads you down the wrong path, turn around as quickly as possible. Evaluate and re-evaluate the data that led you to your decision, and rectify errors. Coke did this, and its “blunder” turned into an amazing opportunity; the company exited the situation stronger than ever, to the point that some call “New Coke” a marketing conspiracy. If your errors can be turned on their head like this, you’re doing pretty well!
1. Econometrics is the application of statistical methods to economic data.
2. Bill Cosby pitches New Coke: https://www.youtube.com/watch?v=yJoocpy7UBc
3. Oliver, Thomas. The Real Coke, the Real Story. New York: Random House, 1986.
4. Schindler, Robert. "The Real Lesson of New Coke: The Value of Focus Groups for Predicting the Effects of Social Influence." Marketing Research, December 1, 1992, 22-27. https://search.proquest.com/openview/4c9167b55ca610abd43435db371a1b70/1?pq-origsite=gscholar&cbl=31079
5. "The Real Story of New Coke." The Coca-Cola Company. November 12, 2012. Accessed June 1, 2015. http://www.coca-colacompany.com/history/the-real-story-of-new-coke
6. Extraneous or lurking variables are unidentified or unquantified variables that have an impact on a study’s dependent variable. Excellent research attempts to control for them.
7. "The Real Story of New Coke." The Coca-Cola Company. November 12, 2012. Accessed June 1, 2015.
It should come as no surprise that Windows Server 2003 reaches End of Life (EOL) on July 14th of this year. What should be surprising is that, by some estimates, there are still over 10 million servers running this operating system. After EOL, companies running Windows Server 2003 will have no support and will receive no patches and no upgrades.
Without a doubt, opportunistic hackers will double down on their efforts to find vulnerabilities and create exploits for this now-12-year-old operating system. Once a vulnerability is found, if you are still running this OS, you will be unable to protect your servers from the threat.
There are many reasons why a company might still be running Windows Server 2003, not least of which is that it has proven to be a stable, reliable operating system that continues to run well on older hardware. Other reasons may include legacy applications and budgetary constraints. Whatever the reasons may be, continuing to run Windows Server 2003 exposes your applications, data and network to a significant level of risk.
Archetype SC can offer several services to help with your migration, making it as easy and cost-effective as possible. If a migration isn’t possible, we can help you protect your legacy systems by securing them through security assessments, network segmentation and network monitoring systems. Please see our services page for more information on how we can help you deal with this serious threat to your business.
Is Big(ger) Data Really What You Need for Better Results?
For the past couple of weeks, I have been thinking about “big data” and its growing popularity*. Prevailing opinion seems to be that if your company does not have and use it, you are falling behind and will soon be out of business. The problem is: what is big data, and how is it going to help your business?
Big data is a term used to describe many different things, which makes it an awkward term to work with. I suggest that a working definition of true big data is “data sets so large, and likely changing so quickly, that they are not feasible to conceptualize.” In other words, if a data set is too big and/or rapidly changing for you to wrap your mind around, it has the characteristics of big data.
Colloquially, “big data” has come to be a stand-in for “analytics.” By this definition, big data means performing analysis, likely including visualizations, on any data set, regardless of size.
These two definitions are miles apart; this month, I am addressing the first. Some companies have real opportunities to benefit greatly from an effective big data solution. Below are a few questions to ask before you commit the hardware, software, and people to implement this type of solution. They are not meant to dissuade you from implementing one, simply to help you think through the process so you get the most out of your investment.
- What questions do you hope to answer using big data? Projects I have been part of have often started with only a nebulous idea of the problem to be solved or the question to be answered. This is an ineffective way to implement a big data solution. While a big data approach does tolerate greater uncertainty or ambiguity in the questions, it is most effective to start with something. Without a question to answer, a data scientist has no idea where to start, and your big data is of little more use than a phone book with no names attached to the numbers.
- How have you tried to answer your question already? A great analyst can do amazing things with even limited data sets. If your in-house team is stumped, it is often significantly more cost efficient to contract a team of outside experts to either produce the solutions or train your staff to produce them. Too often companies add complications needlessly.
- What future value will you gain from this investment? Every project I have consulted on or worked on directly has started with this question. A great salesman will try hard to convince you that you need big data capabilities and that you are missing opportunities you would capture with them. Test this. Ask yourself, your team, anyone who will listen and give a thoughtful answer: what would we gain in the future from this?
Answering these three questions is not easy. An excellent consultant can help you through the process and help you decide which type of “big data” you need. Your answers may well lead you to a Hadoop cluster with big data analytics tools, or they may lead you to more effective analytics tools for the data you already have. Whatever your answers, I am confident the right solution will yield significant returns.
Archetype SC has analytics experts who can help no matter what stage of the process you are in. We can provide support to your analytics team, add value to your current tools, or even be your analytics team. If you already have a big data solution, our experts will guide and assist you in getting the most out of it. Archetype SC…We Do Complicated.
*This idea was sparked by the podcast “Digital Analytics Power Hour,” specifically “Big Data—What an Executive Needs to Know.”