
For small businesses, gathering and analyzing data has long been too costly, complex, and time-consuming to merit a serious investment. There are more than 30 million small businesses in the U.S., but nearly 60 percent say they lack the skills to implement a data analytics solution in their business strategy.

With no data analytics strategy, small businesses miss out on valuable information about their customers, operations, and competitors that can lead to increased revenue and a better market position.

In recent years, more budget-friendly solutions and a sharper focus on smaller businesses have made data analytics an option for a business of any size. In fact, small businesses have an advantage in the data analytics process: they can adapt more quickly to trends in the data, enjoy a much faster speed to market, and foster closer relationships with customers.

For small businesses, data can unlock insights in these five areas, which can lead to increased sales, customer loyalty, and improved operations.

Spot trends in the market

  • The easiest way to be ahead of the curve is to know what’s coming next.
  • Measuring factors like the time of year, weather, and current and future economic conditions, along with monitoring online interactions, creates a flow of information about how and why demand will change.

Learn about your customers

  • Proper data analytics unlocks crucial information about how your customers shop, including what they buy, what they’re in the market for but haven’t purchased, and why they do or do not recommend your business.
  • Customer feedback surveys, social media, and online forums can aid in finding this information, which can be utilized to transform the customer experience.

Understand your competition   

  • Monitoring what your competitors are doing well and where they are struggling can give your business an advantage in marketing and sales.
  • Using online resources can generate information on the popularity of products and brands, while an analysis of social media can show how your competitor is performing at the customer level.

Ensure operational effectiveness

  • Optimizing operations within your organization is an easy way to save money and time while completing tasks faster.
  • Any business process that generates data can be improved – from manufacturing machinery to retail stocking information and supply chain management.

Improve on the past

  • Utilizing data from previous years can help identify long-term growth or regression and effectiveness in sales and marketing strategies.
  • With more data being created every day, a look to the past can provide a snapshot of where your business has performed well and where focus needs to shift.

Making the move to using location-based analytics can be a big decision for your business.

On one hand, using this technology can unlock powerful insights.

Knowing how people move through your facility, how long they stay, and where they came from gives you critical information to improve the customer experience and become more efficient in your operations.

On the other hand, tracking your customers’ movements can open your business up to a whole new set of decisions about how to collect and use user data.

With new regulations about consumer data privacy, knowing what’s allowed — and what’s right — when it comes to collecting data can be confusing.

By opting to use A2 Analytics from Archetype SC for location-based tracking and insights, you can avoid these concerns. Here’s a look at how our technology compares to most when it comes to data privacy:

How most technologies use data

Whenever a consumer downloads a new app or signs up for a service, businesses put terms and conditions in place to cover all their bases in a legal sense.

For many applications and technologies, this giant wall of text states that the consumer entrusts data to the company, including some Personally Identifiable Information (PII), which the company then owns and can use however it wants.

Most consumers breeze right through the terms and conditions, scrolling to click “Agree” without taking the time to fully understand what they’re agreeing to in exchange for the service.

Because the company owns the data, these consumers may have their PII, location data, or anything else gleaned from their devices sold by the services they use.

How data privacy is changing

With regulations like GDPR and the California Consumer Privacy Act going into effect in recent years, consumers are more in tune than ever with how companies use their data.

For businesses not in compliance, there are heavy penalties and sanctions that force the issue of data privacy to be at the forefront of dealings with consumers.

Thanks to these actions, users are becoming more likely to opt out of having their data shared, and are now aware that they have the right to see what a company has done with their information.

Even as businesses move to update privacy policies and introduce more transparency in their data usage, many — including those in the location-based tracking space — have been slow to adapt.

How A2 Analytics handles data privacy

A2 Analytics uses passive sensors to pinpoint devices within a given area, but does not capture any Personally Identifiable Information (PII) in the process.

Using location data combined with cloud computing and machine-learning algorithms, A2 Analytics gives a holistic look at the movement of people via the signals put off by their devices.

It cannot, however, capture information like phone numbers, contact information, or any other associated information about the owner of the signal.

Because A2 Analytics uses anonymized data, it is able to provide insights about the movement of individuals in a facility without the need for an opt-in or an app on users’ devices. This allows for higher capture rates, typically above 85% of all individuals in the space.

This high capture rate allows for highly accurate information about your space with minimal impact on an audience.
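
As a general illustration of how this kind of anonymization can work (a minimal sketch of the concept, not A2 Analytics’ actual implementation), device identifiers can be one-way hashed before anything is stored, so unique devices can still be counted per zone without retaining any PII:

```python
# Minimal sketch of anonymized device counting (illustrative only;
# not A2 Analytics' actual code). A one-way hash of each device
# identifier is stored instead of the identifier itself, so no PII
# is retained.
import hashlib

SALT = b"rotate-me-periodically"  # hypothetical salt; rotating it limits long-term tracking

def anonymize(device_id: str) -> str:
    """Return a one-way hash of a device identifier."""
    return hashlib.sha256(SALT + device_id.encode()).hexdigest()

# Hypothetical signals picked up by passive sensors: (device id, zone)
detections = [
    ("aa:bb:cc:dd:ee:01", "lobby"),
    ("aa:bb:cc:dd:ee:01", "gate_a"),
    ("aa:bb:cc:dd:ee:02", "lobby"),
]

unique_per_zone = {}
for device_id, zone in detections:
    unique_per_zone.setdefault(zone, set()).add(anonymize(device_id))

for zone, hashes in sorted(unique_per_zone.items()):
    print(zone, len(hashes))  # gate_a 1, lobby 2
```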

January 2020 will bring changes to data privacy and security rules for businesses operating within, or interacting with residents of, the state of California.

The California Consumer Privacy Act is the first of its kind in the U.S. It represents a sweeping set of laws that gives residents the right to know what personal information has been collected about them and with whom it has been shared, to have it deleted, and to prevent the sale of such data. Compliance with the California Consumer Privacy Act will force businesses to be more transparent about the data they collect on consumers while allowing consumers to hold businesses accountable for their treatment of that information.

What is the California Consumer Privacy Act? 

Although it’s called the California Consumer Privacy Act (CCPA), the regulations have wide-ranging impacts in the United States and beyond. Much like GDPR in the European Union impacted American companies and consumers, so too will the California Consumer Privacy Act.

To fall within the jurisdiction of the California Consumer Privacy Act, businesses must operate in the state of California or collect personal information on residents of the state. Additionally, businesses must meet at least one of the following criteria:

  • Have at least $25 million in annual revenue
  • Possess data on more than 50,000 consumers, households, or devices
  • Earn more than 50% of business revenue from selling personal data

Businesses not meeting the above criteria will not be largely impacted by the CCPA, but those meeting even one of them have a lot of work to do.

The California Consumer Privacy Act is broad in scope, substance, and enforcement, covering newer forms of data like internet browsing history, metadata, and IP addresses. It also redefines what a sale of data looks like: data does not have to be exchanged for money; the definition expands to include anything “valuable” to the holder of the data. Essentially, trading data for goods or services is covered under the California Consumer Privacy Act.

Companies looking to comply with the California Consumer Privacy Act will not find a wealth of information within the act itself. In fact, the state provides no roadmap to compliance, just some general ideas of what businesses will be required to do and the timeframes around those actions.

What does my business need to do?

First: don’t panic.

The California Consumer Privacy Act goes into law on January 1, 2020, but you’ve got plenty of time to determine what compliance looks like for you. Six steps are recommended for immediate implementation in order to make compliance easier:

  • Update Privacy Policies
    • Much like the rush of updates and emails that came after the European Union’s GDPR regulations took effect in 2018, privacy policy updates and their accompanying notification emails will likely flood our inboxes in 2020.
    • Update your privacy policies and notices to disclose what personal information is collected or sold, and to provide information about opting out of the sale of personal data.
    • Either create a policy specifically covering California residents to pair with your current policies, or create one wholesale policy covering all consumers.
  • Update Data Stores and Business Processes
    • Included in the California Consumer Privacy Act regulation is the requirement to maintain a data inventory to track data processing activities such as:
        • Business processes
        • Third parties with data access, or transfers of data to third parties
        • Products, devices, and applications that process consumer personal data
    • The data inventory or database must track every consumer rights request.
  • Implement Procedures to Maintain Consumer Rights
    • Certain consumer rights have been guaranteed by the California Consumer Privacy Act, including the rights of access, request, notice, and knowledge about personal data gathered by businesses. Consumers will be afforded the power to see and remove:
      • personal information collected,
      • the sources from which the information is gathered,
      • the purpose for gathering the information,
      • the categories of other parties with which the data was shared, and
      • the specific personal information gathered about the consumer by the business.
    • Businesses may provide personal information to a consumer at any time but do not have to provide requested information more than twice in a 12-month time frame.
  • Update Security Measures
      • An easily overlooked regulation of the California Consumer Privacy Act is the responsibility of the business to protect personal data with “reasonable” security. For many organizations, this includes performing a risk analysis and remediating high-risk vulnerabilities to maintain a baseline of security.
  • Make Changes to Third-Party Agreements
    • Third-party data processing will need an updated contract with requirements including:
      • creation of vendor data inventories,
      • use of due diligence questionnaires,
      • providing records of processing,
      • syncing of consumer response processes,
      • onsite assessment and auditing, and
      • mapping of the specific data elements shared with each third party, including designating those transfers that qualify as selling.
  • Train Employees on the New Regulations
    • At a minimum, any employee handling consumer inquiries for data collection and personal information must be informed of all requirements.
    • It is recommended that more in-depth training on the California Consumer Privacy Act occur at all businesses dealing with the new regulations.

Penalties for Non-Compliant Businesses 

Under the California Consumer Privacy Act, penalties are based upon unauthorized access incidents – be they breaches, exfiltration events, theft, or unauthorized disclosure due to poor security procedures and practices.

Fines range up to a maximum of $2,500 per violation and up to $7,500 for each violation in suits brought by the California Attorney General.

Intent is the critical component separating the fine categories: the $2,500 maximum applies to unintentional violations, while $7,500 is the maximum for intentional ones.

What are my next steps?

The California Consumer Privacy Act is more intensive than GDPR, requiring companies to take additional steps to ensure customer data is secure.

Most companies will need to consult with experts in data management, cyber security, and network security to ensure all aspects of the California Consumer Privacy Act are met before the regulations take effect.

The penalties, and the potential for embarrassment from a breach, are significant and place an extraordinary amount of responsibility on businesses to keep data safe.

A partner like Archetype SC, with expertise in data, cyber security, and database management, is an excellent resource to answer questions and provide consultations on California Consumer Privacy Act compliance.

Traveling around Thanksgiving is almost as much of a tradition as the turkey and stuffing that will be passed around the table, football on TV, and Black Friday shopping that will (hopefully) follow a long nap.

No matter how you’re planning to travel this week, through the air or on the ground, AAA is projecting plenty of delays and the busiest travel season in more than a decade.

“Consumers have a lot to be thankful for this holiday season: higher wages, more disposable income and rising levels of household wealth,” said Bill Sutherland, AAA Travel senior vice president, in a press release. “This is translating into more travelers kicking off the holiday season with a Thanksgiving getaway, building on a positive year for the travel industry.”

Automobile traffic, which encompasses most travelers, is projected to be up about 5%, with 48.5 million people expected to travel. Air travel is also expected to be higher, growing 5.4%, with over 4 million expected to hit the skies for the long weekend.

While air travel doesn’t have the traffic jams and congestion of car travel, plenty of frustration exists in the lead-up to boarding an airplane. The stresses of air travel are the unknown wait times at the security line, baggage area, and even the restroom. You get to the airport hours before your flight is scheduled to take off, only to stand in line or sit at a gate awaiting your turn to get to Aunt Marge’s “famous” green bean casserole and the cousins you haven’t seen – or thought about – since last Thanksgiving.

How can that experience be improved, so that the traveling doesn’t crack the top 5 list of things you’re dreading about Thanksgiving?

Arming passengers with more information, such as the expected TSA wait time, how long it takes to walk from the security line to the nearest Starbucks, or even the fastest route through the concourse based on traffic patterns, will create a better passenger experience.

One way to do so is with A2 Analytics technology, which can be integrated with an airport’s mobile app to give passengers the power to decide when to arrive before their flight based on expected TSA wait times. Heat mapping can show how crowded a restroom facility is, while walking routes give passengers accurate, real-time data on how long it will take them to get to a restaurant, shopping center, or rental car facility.
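
To give a sense of how anonymized device counts might translate into a published wait time, here is a hedged sketch using Little’s Law with hypothetical numbers (not necessarily how A2 Analytics computes it):

```python
# Hedged sketch: estimating a queue wait time from device counts using
# Little's Law (wait = number in queue / throughput). Numbers are
# hypothetical; this is not necessarily A2 Analytics' method.

devices_in_security_zone = 120   # anonymized devices currently detected in the TSA-line zone
throughput_per_minute = 8.0      # historical rate of passengers clearing security

expected_wait_minutes = devices_in_security_zone / throughput_per_minute
print(f"Expected TSA wait: about {expected_wait_minutes:.0f} minutes")  # about 15 minutes
```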

Your traveling experience doesn’t have to be a hurry-up-and-wait scenario that leads to more arguments than thankful moments. Take the stress out of travel by requesting that your nearest airport utilize A2 Analytics to optimize the passenger experience.

Is your business in the cloud? If not, perhaps you should consider getting it there. Current cloud computing offerings provide flexibility and agility that are hard to match with traditional on-premises infrastructure.

Here are a few opportunities to consider:

  1. Move custom application development to the cloud when you need to accelerate your development pace. Cloud computing offers on-demand access to development tools and environments not limited by your in-house IT resources.
  2. Extend your applications to mobile and web devices to expand your customer base. Cloud computing offers the scalability that web and mobile apps demand when customer volumes grow or spike, while maintaining the enterprise level security and availability you require.
  3. Leverage the cloud and your on-premises computing to complement each other. Some examples: store your corporate data on-premises and use cloud computing for the analytical processing on it; develop and test applications in the cloud but deploy them on-premises; provide disaster recovery in the cloud for on-premises applications.
  4. Cloud platforms provide database-as-a-service offerings that automate database administrative tasks, freeing your IT resources to focus on other business priorities.

Archetype SC can assist you in exploring these and other opportunities to leverage cloud computing in your organization.

Headlines tout messages like “Data Is the New Oil” [1] and tell us that a “Data Scientist [is] the Sexiest Job of the 21st Century” [2]. These messages are trumpeted from the rooftops and have convinced executives that not only do they need data, and lots of it, they need someone called a data scientist to figure out what it all means. These data scientists are lauded as wizards, able to magically coax insights, knowledge, and wisdom from data. Recently I lauded the role of the analyst in analytics, but I worry that, amid the cacophonous call for data scientists, the message may be misinterpreted or tainted. Let me pull back the curtain on the data wizards of Oz and explain what I mean when I say you don’t need a data scientist.

Before I go any further, let me be clear: there is an absolute need and role for experts in data and business analysis to help you get the most out of your data. Without expert advice, assistance, and support, you will not get the benefits you would otherwise realize. But most companies should think twice about either investing too heavily in building internal specialization or relying solely on a cadre of data scientists to provide all analysis and insights. The path that led to this mindset was a long one, which is why it will be so difficult to displace.

Twenty years ago, the world of data analytics had very high barriers to entry. Computing power and storage were at a premium, and using either required detailed, in-depth coding knowledge. The language R, released in 1993, provided one of the popular frameworks for statistical analysis and is still in use today. Scala and Python offer alternatives, but in each case a fair amount of coding knowledge is required.

More recently, the world was introduced to Hadoop, Spark, Hive, and a litany of other novel concepts, platforms, and products. Their power dwarfs that of earlier tools, making analysis of terabytes and petabytes possible. Largely, however, using these tools still demanded a very high level of technical proficiency. Hence the need for data scientists.

Gradually, the C-suite has absorbed the idea that to gain any benefit from data, a team of data scientists must be employed, and to really gain benefit, brought in-house. I have attended conferences with sessions dedicated to “bringing your analytics in-house” and “why you don’t want to outsource,” but these only exacerbate what I suggest is the problem.

Insights, according to this proposition, can only be gained from the great and glorious data scientist. It suggests that these experts must be part of the staff, and that only these oracles of wisdom can dispense data-driven insight to the business. This is wrong-headed, misguided, and just plain lazy.

Insights and analytics are no longer the domain of the few. With tools like IBM’s Watson Analytics, it is the time of “citizen analysts.” [3] Many tools no longer require extensive coding knowledge to perform analysis. Insights are delivered in an understandable and actionable format. Data exploration is possible for the many, and questions never before thought of can be asked and answered—almost by anyone.

Maybe I was wrong in the title of my post. It probably should be called “You don’t need a data scientist—you need every employee to be a data scientist.” Successful organizations and businesses will begin an era of democratized data and insights; a model where anyone can ask questions and get answers will become the norm. From frontline employees to C-level executives, all will have access to tools to find better ways of doing business through their data. There is still an important role for specialists and experts to perform complex computations, develop algorithms, and implement machine learning code within an organization. My suggestion is simply that all employees be empowered through tools to gain the insights to move forward. Innovation and insight can come from anywhere in a company, and all should have the tools to help them disrupt the sometimes arcane existing models.

Implementing such a starkly different philosophy is complicated. A partner already expert in tools like Watson Analytics, as well as in traditional big data and analytics tools, can provide the guidance, training, and ongoing support absolutely vital to your success. Half-hearted or ill-conceived attempts will result in very costly failures and resentment among employees. Archetype SC has the knowledge, resources, and experience to provide the support you need for success in your complicated data journey, from early-stage conception to ongoing support. Archetype SC: we do complicated.

I would love to hear your thoughts about the ideas presented here; do you think I’m right? Wrong? Do you have examples of how a democratized data culture has worked for you, or how it hasn’t? I welcome civil discourse and will engage thoughtful responses. You can reach me at Patrick.Nord@archetypesc.com.

[1] To illustrate how ubiquitous this adage has become, each word in the quote links to a different article with it as at least part of the headline. Sources purposely range from highly credible to more marginal. Inclusion of these links in no way implies that I agree with the content; I have purposely included some that do not meet my standard of journalistic excellence. To learn more about the origins of the phrase, I recommend reading: http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/

[2] I’m pretty sure my wife was terrified when the Harvard Business Review first published this headline in 2012. She had no idea that her mild-mannered, nerd of a husband had magically been transformed into a 21st century sex symbol. Fortunately for my marriage, it is still every bit as uncool, and unsexy, to be a math nerd.

[3] IBM has begun using this as one of their marketing phrases; I first heard it during an IBM Watson event, but I posit it will become a widely used and accepted phrase, adopted by other players in the analytics space.

A number of years ago, I was quite sick but, like so many people, still felt compelled to go into the office [i]. I did the only logical thing—went to the doctor and got some pills to help me feel good enough to work. As I popped my pills during an afternoon meeting, a colleague looked at me and said, “better living through chemistry, right?” And it’s true. Eighty years ago, when DuPont introduced its “better living” slogan [ii], products created in a lab, and the companies that made them, were derided. In 2015, most people don’t think twice about using any of the thousands of products that the innovation and chemistry of DuPont (and others) have brought us [iii]. We can’t imagine our lives without these marvels—and most tellingly, they have ceased to be marvels and are just part of our lives.

In 80 years, data and its products will play a similarly omnipresent role in people’s lives, and the marvels it creates will simply be basic parts of everyday life. Today, people are too often worried (terrified may be a better word) about the role of data. My own mum worries when I talk with her about my work with big data and predictive analytics. She once told me “I saw Minority Report, and I don’t want the government or companies knowing all that about me.” There is a fear of the unknown and a significant misunderstanding of what the role of data is—but as with any new technology, as people come to rely on and use it, it becomes an indispensable part of our lives.

Power of Analytics in Your Life

When I was a senior in high school, I went on a road trip with some friends to visit a college. To guarantee we didn’t get lost, my parents let me borrow their GPS. We set our first destination in the unit, and the strangely seductive robotic voice told us we would “arrive at our destination in 3 hours, 35 minutes” [iv], and off we went. A friend drove, and we took pride in shaving time off the estimate—first it was 3:20, then 3:10, and finally we arrived in under 3 hours. We spent most of the rest of the trip trying to race the GPS.

I compare that now to a more recent road trip. My wife and I set out from our home in Myrtle Beach, SC for Philadelphia, PA—573 miles. Remembering our high school and college road trips, we scoffed at Google Maps’ 11-hour estimate. With her NASCAR-esque driving, I was confident my wife could do it in closer to 9 hours, maybe 9½ including a couple of quick stops. Google was right. How did we come so far?

The answer lies in Google’s use of data. Each smartphone user with Google Maps and location services enabled sends small bits of data to Google, which are aggregated and analyzed. These are then combined with historical data and can be used to accurately predict drive times [v]. Now I feel good if I can beat an arrival estimate by a minute or two.
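
As a rough illustration of the idea (a simplified sketch, not Google’s actual algorithm), a drive-time estimate can blend live, crowd-sourced segment speeds with historical averages:

```python
# Simplified sketch of crowd-sourced drive-time estimation (illustrative
# only, not Google's algorithm): blend live speeds reported by drivers
# on each road segment with historical averages, then sum the travel times.

segments = [
    # (length in miles, historical avg mph, live avg mph from current drivers)
    (120, 65, 68),
    (300, 70, 45),   # hypothetical congestion on this stretch
    (153, 60, 62),
]

LIVE_WEIGHT = 0.7  # hypothetical weighting between live and historical data

total_hours = 0.0
for length, hist_mph, live_mph in segments:
    blended_mph = LIVE_WEIGHT * live_mph + (1 - LIVE_WEIGHT) * hist_mph
    total_hours += length / blended_mph

print(f"Estimated drive time: {total_hours:.1f} hours")  # about 10 hours for 573 miles
```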

The companies that will continue to lead, or become leaders in, their industries are those that use data effectively. Just as we have better living through chemistry, in the next twenty years we will certainly have better living through data chemistry. Without an effective data and analytics strategy, your company will be like the GPS my parents loaned me years ago: outdated and bordering on obsolete.

If you need help sorting through your data complexities, give us a call. Whether it’s architecting an overall strategy, digging into your data for insights, designing analytics dashboards, or any number of other data-related problems, our team has the expertise to help companies of all sizes navigate the data dilemma maze. Archetype SC…we do complicated.

i "Survey Finds Being Sick Doesn't Keep Hard-Working Americans Out of the Workplace -." NSF. NSF, 19 Feb. 2014. Web. 03 Aug. 2015.

ii Shanken, A. M. "Better Living: Toward a Cultural History of a Business Slogan." Enterprise and Society 7.3 (2006): 485-519. Web.

iii Even if one actively tries to avoid using their products, the abundance in all aspects of our lives makes it nearly impossible.

iv This is exact time is used for illustration; I remember the time shown was over three hours, and less than four, but cannot place the exact minutes

v "The Bright Side of Sitting in Traffic: Crowdsourcing Road Congestion Data." Official Google Blog. Google, 25 Aug. 2009. Web. 03 Aug. 2015.

We continue to read headlines about “big data” and the power it can have on almost any market, but it remains an amorphous entity without definition or bounds. What one person deems big data, another may simply call a large data set. Definitions are important, and in a previous post I proposed two working definitions. I used the first as the basis for last month’s article, which focused on questions to ask before or while implementing a big data solution. As a follow-up, here are a few questions to ask when taking the first steps toward using your data.

Many companies and organizations are sitting on data that is not being used, is being underused, or is being misused. You can ameliorate the deleterious effects of these conditions, or at least understand how to, by asking a few questions. It is wholly plausible that you may not have answers to the questions, or may not know how to use the answers—in which case, you would be well served to partner with an expert to help you get started. These questions are similar to those I recommend asking prior to a big data implementation but have their own spin.

What do you hope to accomplish? This may seem like a simple starting point, but many people I have spoken with don’t have a business case or even a reason in mind to look at their data. Your goal may be general (“increase sales”) or specific (“increase sales by 50% in the southeast among 18-25 year-old males”). Often the starting point will be more general than specific, and it will be refined.

What data do you currently have or collect, and what could you collect with little to no additional effort? Take a good hard look at what you have now—and realize that many of the programs you use every day are collecting data in the background. An excellent analyst can help you find sources you may be overlooking and can help transform some non-traditional data sources into usable data.

What outputs do you need? It is important to understand what you, or the decision makers in your organization, need to see to gain value from the data. While it may be possible to find insights and answer your research questions, if the results are not presented in a format that is easy for people to understand, they will not be used and will serve no function.

Above all, make sure that you have a plan. Trying to implement any analytical solution without one will result in failure. If you need help, it is important to bring in a consultant or analyst early. Spending money at the start of implementation may end up saving you money in the end.

I went to a private, Christian university where vices such as swearing were certainly discouraged, if not outright banned. After registering for my first econometrics class [1] and quickly skimming my reading list, I was surprised (and secretly thrilled) to see Joel Best’s Damned Lies and Statistics as required reading. I fell in love; numbers, modeling, and the power of statistics to paint a picture fascinated me. More recently, I have come to love the data that is behind decisions. Bad data, or more precisely, bad interpretations of data, lead to bad decisions. Good data, or more precisely, good interpretations of data, lead to good decisions. This post focuses on bad decisions; next month I will highlight good ones.

New Coke

In April, Coke lovers like me celebrated (or at least commemorated) the 30-year anniversary of the infamous “New Coke” decision, with Bill Cosby, among others, pitching it [2]. The Real Coke, The Real Story, by Thomas Oliver, gives a great history of the brand and the decision [3]. Coke was losing ground, he accurately argues. The now-famous Pepsi Challenge, in which consumers were challenged to pick their preferred beverage in a blind taste test, was decidedly in Pepsi’s favor. Even in Houston, a city with a 25% market-share advantage for Coke, taste-test data indicated a very slim preference for Coke (p. 34). Coke needed a change, or so they assumed.

To ensure the success of the new formula, extensive market testing was done. Robert Schindler notes that Coke spent more than $4 million “and included interviews with almost 200,000 consumers” in its research [4]. The result was clear—the new formula was preferred “61% to 39%.” The decision seemed clear as well—discontinue old Coke in favor of new. So why did it ultimately flop and earn the label “marketing blunder of the century” [5]? Data was used incorrectly.

In any use of data, it is imperative to identify extraneous or lurking variables [6] and to attempt to control for them. In its attempt to deal with the issue, Coke failed to quantify “the bond consumers felt with their Coca-Cola,” [7] the company states in a retrospective on its website. Emotions, preferences, and tastes are notoriously difficult to quantify and tabulate, but an attempt to control for them is important.
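
To make the lurking-variable problem concrete, here is a hypothetical illustration (invented numbers, not Coke’s actual research data): an aggregate sip-test preference can hide the fact that, within the largest consumer segment, an unmeasured variable like brand attachment would prevent an actual switch.

```python
# Hypothetical illustration of a lurking variable (all numbers invented):
# the aggregate sip-test result looks decisive, but once the unmeasured
# "brand attachment" segment is accounted for, far fewer consumers would
# actually switch.

segments = {
    # segment: (testers, share preferring the new formula in a sip test,
    #           share who would actually switch given their brand attachment)
    "low attachment":  (40_000, 0.70, 0.65),
    "high attachment": (160_000, 0.59, 0.20),
}

testers = sum(n for n, _, _ in segments.values())
prefer_new = sum(n * sip for n, sip, _ in segments.values())
would_switch = sum(n * switch for n, _, switch in segments.values())

print(f"Sip-test preference for the new formula: {prefer_new / testers:.0%}")  # 61%
print(f"Share who would actually switch: {would_switch / testers:.0%}")        # 29%
```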

Avoiding a New Coke-Sized Mistake

Bad interpretations and skewed data are two simple reasons Twain classically grouped statistics with the two other types of lies—lies and damned lies. As you implement analytics and data driven decisions, be careful of the implications of bad data practices or misinterpretations.

First, make sure your data is clean. Are you seeing exactly the result you want? Data is easy to aggregate and cherry pick to give rosy results. Question anything that seems just a bit too good.

Second, ask the simple question “what else may have an impact on this result.” A colleague of mine taught me the rule of 5 whys; when a decision is made, especially one that may not make a lot of sense, ask “why” five times. If there are five good answers, it probably is a good decision. If not, it will become obvious. I would suggest the 5 “what elses” rule. Keep searching for the what else until you have built a really good model. Generally, I believe that the bigger the import of the outcome, the more time should be spent creating and refining the model. Your final algorithm is likely to be beautifully simple—but the process to get there likely was not.

Third, don’t be afraid to ask for help. It’s not always popular or easy, but even experienced analysts and data scientists get stuck or have their own inherent biases. It can be invaluable to ask a friend or colleague not working on the project to take a look at your numbers and conclusions. There can be great power in a “sanity check,” even if it comes from someone other than a traditional expert!

Fourth, admit when you get it wrong. The conclusion of New Coke’s story, with the re-introduction of Coca-Cola Classic and the phase-out of New Coke, is a fitting end to these tips. Recognizing a mistake can be painful, but when data leads you down the wrong path, turn around as quickly as possible. Evaluate and re-evaluate the data that led you to make your decision, and rectify errors. Coke did this, and their “blunder” turned into an amazing opportunity; the company exited the situation stronger than ever—to the point that some call “New Coke” a marketing conspiracy. If your errors can be turned on their head like this, you’re doing pretty well!


  1. Econometrics is the study and application of statistics as applied to economic data
  2. Bill Cosby pitches New Coke: https://www.youtube.com/watch?v=yJoocpy7UBc
  3. Oliver, Thomas. The Real Coke, the Real Story. New York: Random House, 1986.
  4. Schindler, Robert. "The Real Lesson of New Coke: The Value of Focus Groups for Predicting the Effects of Social Influence." Marketing Research, December 1, 1992, 22-27. https://search.proquest.com/openview/4c9167b55ca610abd43435db371a1b70/1?pq-origsite=gscholar&cbl=31079
  5. "The Real Story of New Coke." The Coca-Cola Company. November 12, 2012. Accessed June 1, 2015. http://www.coca-colacompany.com/history/the-real-story-of-new-coke
  6. Extraneous or lurking variables are those unidentified or un-quantified variables that have an impact on a study’s dependent variable. Excellent research attempts to control for them.
  7. "The Real Story of New Coke." The Coca-Cola Company. November 12, 2012. Accessed June 1, 2015.

Is Big(ger) Data Really What You Need for Better Results?

For the past couple of weeks, I have been thinking about “big data” and its growing popularity.* The prevailing opinion seems to be that if your company does not have and use it, you are falling behind and will soon be out of business. The problem is: what is big data, and how is it going to help your business?

Big data is a term used to describe many different things, which makes it an awkward term to work with. I suggest that a working definition of true big data is “data sets so large, and likely changing so quickly, that they are not feasible to conceptualize.” In other words, if a data set is too big and/or rapidly changing for you to wrap your mind around, it has the characteristics of big data.

Colloquially, the term “big data” has come to be a stand-in for the term “analytics.” By this definition, big data refers to performing analysis, likely including visualizations, on any data set, regardless of size.

These two definitions are miles apart; this month, I am addressing the first. Some companies have real opportunities to benefit greatly from an effective big data solution. Below, I suggest a few questions to ask before you commit the hardware, software, and people resources to implement this type of big data solution. They are not meant to discourage or dissuade you from implementing a solution—simply to help you think through the process so you get the most out of your investment.

  • What questions do you hope to answer using big data? Projects I have been a part of have often started with only a nebulous idea of the problem to be solved or question to be answered. This is an ineffective approach to implementing a big data solution. While it is true that a big data approach means that there can be greater uncertainty or ambiguity in the questions, it is most effective to start with something. Without a question to be answered, a data scientist does not have a clue where to start, and your big data is of little more use than a phone book with no names attached to the numbers.
  • How have you tried to answer your question already? A great analyst can do amazing things with even limited data sets. If your in-house team is stumped, it is often significantly more cost efficient to contract a team of outside experts to either produce the solutions or train your staff to produce them. Too often companies add complications needlessly.
  • What future value will you gain from this investment? Every project I have consulted on or directly worked on has started with this question. A great salesman is going to try hard to convince you that you need big data capabilities and that you are missing opportunities you would capture with them. Test this. Ask yourself, your team, anyone who will listen and give a thoughtful answer: what would we gain in the future from this?

Answering these three questions is not easy. An excellent consultant can help you through this process and help you decide which type of “big data” you need. Your answer may well lead you to a Hadoop server with big data analytics tools, and it may lead you to finding more effective analytics tools to work with what you have. Whatever your answers, I am confident the right solution will yield you significant returns.

Archetype SC has analytics experts who can help no matter what stage of the process you are in. We can provide support to your analytics team, add value to your current tools, or even be your analytics team. If you already have a big data solution, our experts will guide and assist you in getting the most out of it. Archetype SC…We Do Complicated.

*This idea was sparked by the podcast “Digital Analytics Power Hour,” specifically “Big Data—What an Executive Needs to Know.”

Imperatives for success

An IBM white paper with the above title got my attention because of its heavy emphasis on software development tightly coupled to the business value it delivers.

Thinking of software development as a supply chain process seemed intuitive, and it was easy to buy into the concept.

Industry trends: The move toward software supply chains

Software delivery is a fundamental business process, one that benefits companies, public sector organizations and a growing host of institutions that rely on the power of digital technology.

Software enables people to collaborate and innovate in the workplace, to automate routine tasks and processes, to make better decisions and to become more competitive as a result. Software fuels the use of the Internet and web to socialize and to gather information, and software is increasingly part of every device, from mobile phones to power plants and spacecraft.

Organizations can acquire the sophistication, competence and skill they need for software development through packaged applications, open source components and outsourcing. As a result, companies are creating new models for sourcing software that promise more consistent and reliable products. Some companies use software factories and outsourcing vendors to reduce or distribute risk, to lower costs, to gain expertise and to add value where the company on its own cannot. Others rely on commercial off-the-shelf software to reduce the time it takes to get products to market, and some have even turned to “crowdsourcing” to exploit innovation by the many.

For example, businesses use application packages for HR, finance, sales and distribution. They then rely on captive development centers to produce new software or to customize packaged applications more cost effectively. In addition, open source software supports in-house development projects such as internal, web-based applications (for example, a scheduling system). All of these elements that contribute to the production of software can be understood as parts of a complex, interdependent supply chain, one that must be managed effectively.

IBM draws parallels from successful traditional supply chains and applies them to the “software development” supply chain. The example used is one of the possible software supply chains: outsourcing.

To achieve similar successes with the software supply chain, the goal for all participants is a greater ability to manage the value that they are providing and receiving and to cultivate trusted business relationships as successes validate the business model.

Achieving this goal requires moving from measuring output only to also measuring actual business value. The issue of trust becomes prevalent when no good means of monitoring value exist. For example, if you are an acquirer, how can you trust that your supplier will deliver what you need? If you are a supplier, how can you trust that your acquirers will not endlessly rescope the requirements while you spend countless hours on prototypes, try to help them understand their true needs and bear the burden of costly changes?

The remedy to date has been to provide overly detailed specifications of what needs to be produced before signing a contract, focusing on output over value. Such a focus increases bureaucracy, while stifling innovation, and usually leads to one of two extremes: a static, defensive, contracting approach to controlling the relationship or a weak, open arrangement based on measuring time and budget without clear incentives.

IBM believes that, to solve the problems of value and trust and to bring efficiency and high quality to the supply chain, companies should apply a consistent set of principles. IBM refers to these principles as the imperatives for securing value and trust.

Here’s the explanation of how the business and its software development supply chain work together to deliver software that brings value.

Three imperatives for securing value and trust

The three imperatives for securing value and trust can apply to the gamut of software supply chains, including outsourcing, off-the-shelf applications (whether open source or commercial) and captive development centers.

These imperatives are:

  • Balance governance with agility.
  • Increase visibility.
  • Deliver measurable business value.

Decisions needed to effectively manage software development are business-value driven, based on clear and simple business-value metrics.

Steering projects to the desired outcome

All parties, including lines of business, development, operations, vendors and captive development centers, must get together at key times to make decisions. For example, they must decide which projects to fund, whether projects require course correction and when projects are ready to be deployed. Business-value driven milestones that provide visibility about the true status of the project and target objectives can steer the application development effort to a favorable outcome.

Providing measurable business value

Using simple ways to measure and monitor business value throughout a project or program is an absolute necessity. Clear, detailed traceability between the needs of the business and the software requirements embedded in statements of work facilitates the coordinated delivery of multiple software components that is part of providing value.

The above excerpts summarize how tightly coupled software development needs to be to the business case and business-value metrics. This is absolutely the linkage that should exist between the business and the software supply chain.
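
As a hypothetical sketch of what such traceability might look like in practice (an illustrative format, not a prescription from the IBM paper), each requirement in a statement of work can carry an explicit link back to a business-value metric, making gaps easy to flag:

```python
# Hypothetical sketch of business-value traceability (illustrative format,
# not from the IBM paper): map requirements to business goals and flag
# any requirement with no business-value linkage.

business_goals = {
    "BG-1": "Reduce order-processing cost by 15%",
    "BG-2": "Increase online conversion rate by 5%",
}

requirements = {
    "REQ-101": {"desc": "Automate invoice matching", "traces_to": "BG-1"},
    "REQ-102": {"desc": "One-click checkout",        "traces_to": "BG-2"},
    "REQ-103": {"desc": "Rewrite admin UI",          "traces_to": None},
}

untraced = [rid for rid, req in requirements.items()
            if req["traces_to"] not in business_goals]
print("Requirements with no business-value linkage:", untraced)  # ['REQ-103']
```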

However, accomplishing this linkage has been a challenge in many of the development projects I’ve managed. Small and mid-size businesses as well as Fortune 500 companies often struggle with defining the business case to justify the software development they feel is needed. Assigning value to the deployment of new software can imply that the business is then accountable for ensuring the value is delivered. Business leaders can be reluctant to become accountable for the increase in market share and margin, or decrease in costs, that can form the basis of a software development business case. When that happens, the business-value metrics that would guide the software development are not defined or are incomplete.

Businesses that are culturally capable of establishing the business case and value metrics for software development will be the ones that can treat software development as a supply chain process. Businesses that struggle with defining a business case and value metrics will not be as successful in their software development projects.
