British Government creates major fraud incident by using IT to save on human costs! £20+ million lost.

The current benefit scam targeting Universal Credit in the United Kingdom has yet again shown how risky it is to approve payouts based on online validation alone. Because the government paid out quickly, scammers jumped at the chance to coax personal information out of people, and even fabricated personal information, to gain access to as much money as possible. Current estimates are that over 20 million pounds has been stolen by fraudsters.

To gain people’s trust, scammers made heavy use of social media to sell the fraud. They also completed the online application themselves, so any warnings about what was being signed up for were never seen by the victims.

If we look at the original, honorable goal of the online application, it was to provide people with money until their benefits were reviewed and approved, as the approval process was taking 5 weeks or more. The Government thought it would be great to give people money, in the form of a loan, that would later be repaid out of the claimant’s benefits (if approved) or repaid by the claimant directly if benefits were not approved. This way, the claimant could avoid cash-flow issues. The real problem was that the Government did not have enough staff to process the claims faster. The IT solution, combined with the loan, was a cheaper approach that was badly implemented.

What the IT department working for the British Government completely missed when setting up the solution was implementing all the rules that human employees would use to process an application. This was a complete failure on the part of the business analysts involved in this software development, and it has ended up costing the UK government millions.

Some of the checks a human employee would have performed when processing the application (a rough sketch of these checks as code follows the list):

  1. Is the applicant aware of what they are signing up for? – Scammers completed the application on behalf of the applicant, so the applicants never fully knew. Scammers also used social media to describe the money as a grant rather than a loan.
  2. Do I have confirmation that the applicant knows what they are signing up for? – As the applicants were not on the web site, they never confirmed what was being done. Victims only found out afterwards what had really been done.
  3. Do I have some reliable proof that the claim is accurate? – Scammers stated whatever they wanted in the claim, because validation took place over the 5 weeks after the money had been sent.
  4. Does the applicant know the amount and the fees (if any) associated with the claim? – Scammers charged a fee to fill in the application on behalf of the claimant, even though in reality there were no fees.
  5. Does the applicant know who is supposed to submit the claim? – Because the application was done online, there was no biometric validation (compared with being interviewed by a government employee), and scammers jumped on the opportunity to submit the claim themselves.
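As an illustration only, here is a minimal sketch of those human checks encoded as explicit validation rules. The field names and the Claim structure are assumptions for this example, not the actual Universal Credit system; the point is that every rule a caseworker applies can be stated, and therefore implemented, before money is paid out.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    # Hypothetical fields; the real Universal Credit data model is not public here.
    applicant_id: str
    session_verified: bool    # did the applicant themselves complete the session?
    terms_confirmed: bool     # explicit confirmation it is a loan, not a grant
    evidence_documents: list = field(default_factory=list)
    declared_fee: float = 0.0  # the real scheme charges no fee

def validate_claim(claim: Claim) -> list[str]:
    """Return the caseworker rules (from the list above) that this claim fails."""
    failures = []
    if not claim.session_verified:
        failures.append("cannot prove the applicant completed the application")
    if not claim.terms_confirmed:
        failures.append("applicant has not confirmed they understand this is a loan")
    if not claim.evidence_documents:
        failures.append("no supporting proof supplied before payout")
    if claim.declared_fee > 0:
        failures.append("a fee was charged; the scheme has no fees")
    return failures

# A scammer-submitted claim fails every rule and should never reach payout.
bad = Claim("A123", session_verified=False, terms_confirmed=False)
print(validate_claim(bad))
```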

Here are the checks that require special attention in our own projects whenever we provide money quickly based on online validation alone:

  • We need to guarantee that the party receiving the money is who they say they are, and that they know exactly how much money is theirs. This could be done by ensuring the payment goes to an already validated bank account. In this fraud, many of the victims actually received the money into their own bank accounts but thought they were obligated to pay the scammers part of it, because the scammers had completed the online application.
  • We need to guarantee that the applicant is the one completing the application online, so that they are aware of what they are doing. For any warning or informational messages tied to the claiming or providing of money in an online application, we have to be 100% sure that the party receiving the money (the party legally tied to it) has actually seen them! A web-page pop-up with a click of “Yes”, along with a captured IP address, is not enough to verify that the person who needed to see the message actually saw it. We need to guarantee that the person at the computer is the valid party involved. This is where biometric information, or a chip-style reader for an identity card (as used in credit cards), would come in handy. Some companies use validated phone numbers with text messaging to achieve this; however, if the phone number is hacked or changed by the scammer, that check fails. In the current fraud, it took the Government several weeks to work out that claimants had never used the web site to complete the application and thus were not aware of what was being signed up for. A minimal sketch of a stronger check follows this list.
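As a hedged illustration of the weakness above, the sketch below binds a one-time code to both the claim and the payee bank account, so a code confirmed on a scammer’s phone cannot authorize a payout to a different account. The function names and flow are assumptions for this example, not any real government API.

```python
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)  # illustrative; real systems use managed keys

def issue_challenge(claim_id: str, payee_account: str) -> tuple[str, str]:
    """Issue a one-time code bound to this claim AND this payee account."""
    code = f"{secrets.randbelow(10**6):06d}"
    # Bind the code to what is being authorized, not merely to a phone number.
    mac = hmac.new(SERVER_KEY, f"{claim_id}|{payee_account}|{code}".encode(),
                   hashlib.sha256).hexdigest()
    return code, mac  # code goes to the applicant's verified channel; mac is stored

def verify(claim_id: str, payee_account: str, code: str, stored_mac: str) -> bool:
    """The code only verifies if claim, payee account and code all match."""
    mac = hmac.new(SERVER_KEY, f"{claim_id}|{payee_account}|{code}".encode(),
                   hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, stored_mac)

code, mac = issue_challenge("CLAIM-42", "GB00-REAL-ACCOUNT")
print(verify("CLAIM-42", "GB00-REAL-ACCOUNT", code, mac))  # True
print(verify("CLAIM-42", "GB99-SCAM-ACCOUNT", code, mac))  # False: wrong payee
```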

In summary, the British Government got itself into this position because it did not want to hire more staff to process claims faster. It is the classic case of relying on Information Technology to speed up a process on the cheap, without due consideration of the risks involved or of the human functions being replaced by the computer. Whoever did the analysis and design of this payment solution was incompetent beyond belief.

Process improvement through nudging

As business analysts, we are often called in to look at ways to improve a current process. The measurable improvements desired by the business to justify the effort are usually in:

  • Quality
  • Reductions in costs
  • Increase in processing per hour

Any process to be improved has a certain amount of dynamic variability. From a high-level mathematical perspective, such processes are treated as “dynamic resource allocation” problems because of that variability. By controlling the variability with nudges, we can improve the process.

  • NOTE: With the advent of stronger AI, we will in the future see more reliance on AI to advise on the best way to improve a process, and it will be left to the Business Analyst to help put that advice in place.

What is “nudging” and how is it used to improve a process?

Nudging is where we don’t force a process change or add new processes, but instead nudge the behavior of the participants in the current process to get the desired results. A current example is financial institutions offering rewards to customers who go paperless for their statements. Going paperless improves:

  • Throughput – the percentage of outstanding statements processed per hour rises because the printing backlog shrinks.
  • Speed of delivery – statements arrive in hours instead of days.
  • Quality – the statement cannot be delivered to the wrong address or damaged in printing.
  • Cost – mailing costs are reduced.

You can see from the 4 bullet points above that a lot can be achieved just by nudging the customer in the statement process to no longer expect a paper statement.

So the next time you are looking at improving a set of business processes, ask yourself whether you can achieve measurable improvements by “nudging” the current users of the process in a supportive direction, at less cost than forcing changes through or building solutions that have to manage many variables.

The Data Lake – understanding the concept


As data capture has grown, so have the techniques for handling the data. For about 10 years now, the Data Lake has been appearing in the business world as part of the data capture concept.

When I originally started out, data was distributed all over the place, and business analysts had to ask various departments for extracts to get an overall view of the company. It was time-consuming.

Next came the large data warehouse, accepting data from all over the company into a central store. However, it could take years to get data into the data warehouse. At one place I worked, it took a minimum of 2 years to absorb new data. The delay was caused by the need to model the data and understand it completely before it could be absorbed: data modelers had to work out whether new tables were needed, and BAs had to justify the business cost of storing the data. On top of this, existing reports were expected to use the data from the data warehouse, so they all had to be rebuilt against the new data structure.

As companies evolved to produce even more data, the data warehouse wait time increased significantly. Waiting for centralized data did not tie in well with the corporate strategy of being able to know what is going on around the company. At this point the Data Lake concept came into being. The Data Lake is basically a collection point for all data from around a company, in any type of data structure. Data does not need to be refined to end up in the Data Lake; good and bad data alike are collected. Visually, the Data Lake term represents the departments that generate data as streams feeding the lake.

As data collects in the Data Lake, some of it will eventually make its way into the enterprise data warehouse based on need and cost justification. The Data Lake approach creates one source of data for people in a company to access. Data scientists can look at what is being captured and see whether any of it is useful for what they are trying to analyze.
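As a minimal sketch of the concept (no particular product is implied), the ingestion step below lands raw records in cheap storage exactly as received, together with searchable metadata. The paths and fields are assumptions for illustration.

```python
import json, time, uuid
from pathlib import Path

LAKE_ROOT = Path("/data/lake")  # hypothetical mount point for cheap bulk storage

def ingest(source_dept: str, payload: bytes, content_type: str, tags: list[str]) -> Path:
    """Land raw data as-is: no modeling or cleansing, just capture plus metadata."""
    day = time.strftime("%Y/%m/%d")
    dest = LAKE_ROOT / source_dept / day / f"{uuid.uuid4()}.raw"
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(payload)  # good and bad data alike
    # Searchable metadata is what makes the data discoverable later; skipping it
    # is one of the cons listed below.
    meta = {"source": source_dept, "content_type": content_type,
            "tags": tags, "ingested_at": time.time(), "path": str(dest)}
    dest.with_suffix(".meta.json").write_text(json.dumps(meta))
    return dest

ingest("claims-dept", b'{"claim_id": 42, "amount": 1000}',
       "application/json", ["claims", "2019", "raw"])
```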

Pros of Data Lake:

  • Centralized repository of company data, which in theory makes it easier to find data.
  • Quick to capture data into, as nothing is refined in any way.
  • Allows the data-source departments to focus on supporting their applications and business, rather than on providing formal data extracts to be absorbed by a data warehouse or other team.
  • No waiting on another department’s resource availability to get access to its data.

Cons of Data Lake:

  • Resources have to be hired to support the collection of data into the Data Lake and the sharing of it.
  • Failure to capture good, searchable metadata for the data being stored in the Data Lake will prevent that data from being discovered later.
  • The people associated with the original data generation are not part of the Data Lake team, so first-hand knowledge on that team is limited to non-existent. Data knowledge relies entirely on the metadata captured at the time the data is stored.
  • Useful and not-so-useful data alike are captured, as the focus is on capturing data.
  • Dependent on cheap storage, and on the resources to support the physical storage and networks, to keep the large volumes affordable.
  • Sensitive data should not end up in a Data Lake, due to the risk that it may be exposed.
  • Not suitable for operational reporting where reports must be generated within 24 hours of the data being created.

In summary, the Data Lake concept is just a fancy way of saying “centralized raw data store built from data provided by the different departments in a company”. A Data Warehouse can pull data from the Data Lake into the Warehouse at a later date, once the need to store it formally has been identified.

Are criminals and government fines driving new requirement methods?

We have all had the business partner who never quite tells us everything we need to know on a project, but spare a thought for those who design solutions to defeat criminals. In their case, the criminal is not sharing what he does, and most design is done in reaction mode.

One area that has received recent focus is Money Laundering. Financial Institutions that end up involved with Money Laundering risk not only loss of money and reputation but also fines imposed by their own government, or even by other governments. Identifying money laundering has become so difficult that nobody really knows how to define the complete set of requirements for detecting it.

Traditional requirements methods basically do not work anymore. With traditional methods, the Business Analyst identifies and captures the business rules and then implements them. Unfortunately, the people who make these particular rules are not the ones sharing them with us.

Criminals don’t tell us that if they do X, Y & Z then they are money laundering. The criminal’s desire is to fly under the radar as a normal customer. Their methods for appearing ordinary have become so good that the people trying to create rules after the fact can no longer keep up. This puts Financial Organizations in a bit of a pickle.

Governments have made it so that Financial Institutions cannot just ignore the problem of Money Laundering and hope for the best. After all, if a Financial Institution goes belly up, it can affect a whole country. To avoid this worst-case scenario, a government may force a Financial Institution out of business if it is not confident the institution is compliant with the law. If you want to get business owners and board members to do something about a problem, threatening their business is one solid way to go about it, and governments know this.

To get around not knowing which business rules to implement to identify Money Laundering, Financial Institutions are turning to Artificial Intelligence (AI) to fill the knowledge gap. AI scans through large amounts of data to learn, establish, monitor, and update the business rules that identify money laundering or potential money laundering. Systems then apply the rules on the fly to freeze accounts, recover laundered money, and notify government and law enforcement agencies.
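As an illustrative sketch only (not any specific institution’s approach), unsupervised anomaly detection is one common way to “learn the rules” when the rule-makers won’t state them. Here scikit-learn’s IsolationForest flags transactions that don’t resemble the ordinary customers criminals try to imitate; the features are invented for the example.

```python
# A minimal sketch, assuming scikit-learn and invented transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Ordinary customers (hypothetical features per row:
# [amount, transfers_per_day, distinct_counterparties]).
normal = rng.normal(loc=[50, 2, 3], scale=[20, 1, 1], size=(1000, 3))
# A structuring pattern: many just-below-threshold transfers to many counterparties.
suspect = rng.normal(loc=[9900, 40, 30], scale=[50, 5, 5], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 = flagged as anomalous, 1 = looks normal
```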

While I am not at liberty to talk about the specific data being worked with, I can discuss what this means from a Business Analyst perspective. Data scientists and AI engineers will take over the role of capturing and implementing the business rules that identify Money Laundering; previously, this was handled by the Business Analyst. However, before you mourn the loss of another piece of work for the Business Analyst role, new opportunities will open up:

  • AI needs data, and lots of it. Business Analysts will be recruited to provide data interfaces into the AI machine, at least for the next little while. Eventually the desire is to end up with more of a web-crawler approach, where the AI establishes new sources of data with little to no human intervention.
  • While AI will be good at identifying that an action is needed, it will not be good at implementing the action (at least until we build the fictional “Skynet”). Business Analysts will always be involved in ensuring that the action is communicated to where it needs to go and that any automatic response within an organization is performed. With the way companies, governments, and law enforcement agencies reorganize themselves on a regular basis, this is unlikely ever to be a static solution. Given that changing environment, it should keep Business Analysts busy for a while.

What I think will be interesting in the future is whether the Data Scientists and AI engineers will be able to explain the AI’s reasoning for deciding that a particular event is Money Laundering. Eventually it could grow beyond their understanding. I can see a future where Business Analysts are called upon to get AI systems to produce human-readable reasoning; maybe that will be a new job task for us all.

In summary, criminals and governments are driving the need for AI to step in and generate IT requirements on the fly. This ensures that criminals are kept in check and that businesses are not shut down by governments for failing to keep them in check. While some BA roles around capturing and implementing business rules will be lost, new roles will open up in support of the AI infrastructure and especially of the output from the AI solutions.

Are we building the world defined by the movie “Gattaca”?


This post is about thinking through how the project we are working on will impact the future of society as we know it. The quality of the requirements we gather can affect society later in ways we did not anticipate. Most importantly, we need always to provide a secure back door through which incorrect assumptions generated by computers can be overridden by humans; otherwise we are at the mercy of the machines.

The 1997 movie “Gattaca” was about a person’s DNA being used to determine their potential in society. DNA would be used to work out which careers you would have access to. The protagonist pays to use someone else’s DNA (adoption of an identity rather than theft) to achieve an objective he was barred from because of his “inferior” DNA.

If you think about the mechanics of the future life direction portrayed in “Gattaca”, it revolves around DNA having been analyzed in advance to determine the career potential of a human being. Once that analysis was done, computers took control of processing an individual’s DNA to determine their future worth to society. There is no need for human involvement in the decision any longer, as the formula is already developed and the outcome determined.

Those of us who work in IT may encounter projects that are a cog in the works of this “Gattaca” future. These projects seek to define a value for a human based on information received. Certainly at this point they do not include DNA sampling to determine that value, but that would seem to be only a matter of time. Including DNA sampling would give us the potential to look beyond the immediate prediction of value and include a prediction of future performance as well.

Some of you may remember the 1995 movie “The Net”, in which the protagonist’s identity is erased, leaving her unable to do anything in life. Having the computer store information related to our identity was the first step towards where we are now. What is significant in today’s world is how computers attach points to that stored information, and those points affect your value in society.

You may be asking at this point which IT projects would fall under this future “Gattaca” classification of determining human value. Below are some examples:

  • Human Resource systems – tracking of sick days and vacation days. Certain trends identified by the system can flag an employee as a risk.
  • Credit scores – speaks for itself, and reliant on information that may or may not be correct.
  • Terrorist name matching – how many Mr. Smiths get delayed at the airport?
  • Computer activity monitoring – not active enough, and your employer can terminate you.
  • Grocery store cards – are you buying healthy? Where does all that information go?
  • Job interview software – companies are pushing more and more to remove the human from the initial interview loop and instead rely on a computer interview to screen applicants. Answer a question the wrong way and you may never get past the computer.
  • Job search engines – the computer selects which resumes get reviewed for potential interviews. If you don’t spend time working out the current keywords, your resume may never be seen by human eyes.

The list above represents just some of the projects you may work on that determine the current value of a human. Underlying each one are formulas sold as improvements in efficiency, reductions in cost, and so on, for the organization that deploys the software. The side effect, of course, is that a computer now values the individual human based on the formulas applied to the data received. While they don’t use DNA yet, they are certainly getting close.

We already know that in today’s world identity theft can give the thief access to things they would not otherwise have. That was the premise of “Gattaca”, where the individual adopted another’s identity to achieve his goal. Identity theft would not be so easy if computers did not store the points that determine an individual’s value to an organization and society, opening or closing doors of opportunity. After an identity theft, the victim may lose their place in society for a while, if not permanently, as the computer adjusts their points total based on the information the theft generated. Victims are then tasked with reaching out to actual humans to correct what the computer insists is valid, and those humans can only help if the computers have a back door through which the incorrect information can be overridden.
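To make the “secure back door” concrete, here is a minimal sketch (names and fields are assumptions for illustration) of an automated decision record that an authorized human can override, with the override audited rather than silently editing the data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    score: float   # the computer's "points" for this person
    reason: str
    overrides: list = field(default_factory=list)  # audit trail, never deleted

    def effective_score(self) -> float:
        """A human override, if present, beats the computed score."""
        return self.overrides[-1]["score"] if self.overrides else self.score

    def human_override(self, reviewer: str, score: float, justification: str):
        # The back door: authorized humans can correct the machine, and the
        # correction is logged so it can itself be reviewed.
        self.overrides.append({
            "reviewer": reviewer, "score": score, "justification": justification,
            "at": datetime.now(timezone.utc).isoformat(),
        })

d = Decision("victim-17", score=120.0, reason="accounts opened by identity thief")
d.human_override("case_officer_9", 720.0, "identity theft confirmed, police ref held")
print(d.effective_score())  # 720.0, with the original machine score preserved
```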

So the next time you start work on a project, ask yourself whether you are building another “cog” in the “Gattaca” world machine, and whether you have provided a secure back-door override.

AI will end the need for IT Requirement Business Analysts

I will say up front that this is purely an opinion piece, as I don’t plan to provide references to back up the statements. The purpose is simply to think about the evolution of technology that has led us to this point, and its impact on the IT Requirements Business Analyst.

Historically speaking, if we start at the Industrial Revolution, machines were used to speed up production. Back then, the equivalent of a software programmer was a mechanical engineer who designed the levers, shafts, cogs, and so on to produce the desired result. More often than not, workers were required to keep the machines fed with raw material and to remove the finished product. You could think of the raw material as the equivalent of data coming into modern data-processing software, and the end product as the finished use of the data, such as reports, dashboards, or account updates.

When the electronic computer moved into the office world in the late 40s, we started to see the processing of data by human clerks being replaced by the computer. The mission of the computer in those days was to reduce the number of humans involved in processing data. At this point, businesses were not looking for the computer to give them guidance, just to allow data to be processed more quickly and cheaply. Designers of software could focus on the tasks already done by humans and create software and peripherals, such as printers, to replace those tasks. This was done by job shadowing to understand the process.

Moving into the fifties and sixties, computers became available that could be programmed with complex algorithms, allowing data to be processed in ways that supported predictions. At this point, designers were no longer thinking about replacing workers but about leveraging the processing power of the computer to produce business decisions. While you could argue some of this was done in WW2 to break codes, those machines were purpose-built for the task. What the more modern computers allowed was for programming languages to adapt the decision task to the current need. The one handicap, however, was the speed of the computers of the day: decisions were not generated in real time, and the programming involved was complicated.

With the advent of the more powerful and useful computers that came out of the late sixties, we start to see computers become part of real-time processing. Computing power and storage kept increasing every year, allowing businesses to look at new ways of saving costs and increasing returns by letting the computer streamline their processes, going beyond replacing people to expanding the business itself. This was achieved through the development of new interfaces beyond the punch card of old. Terminals became available that allowed direct access to the computer processor, enabling live updates of data – think airline ticket handling. In this period, the designer was seeing how new tools available via technology could enhance rather than replace the current process. Even with all these advances, however, the use of the computer still depended on a designer working out the needs of the business and getting them coded. Software solutions were rigid and limited to the design parameters provided.

Even at the turn of the century, faster computers with greater data-handling capacity were still paired with limited software designs built on fixed parameters and limited interfaces for data collection.

Where we seem to make the evolutionary leap towards AI is when the processing power and data-handling ability of computers cross a threshold where they can consume non-human-prepped data beyond just text. Previously, processing power limited what a computer could do in real time. Now a computer can process not just plain textual data but also images, sound, and so on, and base decisions on them. It is as if we have released a prisoner from a small cell where the only thing they could see was text and their hearing, touch, and other senses were deliberately disabled. We are now in a scenario where computers can be educated to interface with humans on a natural level. All the designer needs to do is define the data streams (based on the context – driving a car, for example) and the measures of success; the AI can then learn to process the data.

Now before I talk about the impact of AI on the BA role, I want to break the role into two:

Role 1 is the BA that looks at business processes.

Role 2 is the BA that looks at interfacing IT with business processes.

Business-process BAs (Role 1) are already being heavily replaced by the Product Owner role, so while this BA will eventually cease to exist, the role has a chance of living on for a period with the advent of AI as a Product Owner. Eventually, however, even Product Owners will be replaced by more sophisticated AI solutions. The big risk, though, is that businesses become clones of each other. An AI analyzing the marketplace may come up with the same opportunities as another business’s AI, killing the market opportunity as both deliver the same solution at the same time, leaving neither with an advantage. While this happens with humans today, it occurs less often, as humans cannot deliver ideas at the 24x7x365 speed that AI can. Stock market meltdowns have already been shown to happen when multiple stock-monitoring systems trigger sell decisions because of the same trigger event. The same issue will arise when AI takes over business Product Ownership.

Role 2 BAs, who focus on requirements for IT design, are most likely to be impacted by AI in the very near term. This role has already seen so much of its work move to UI designers and Data Warehousing specialists (both of whom are at risk of being replaced by AI as well) that the amount of work left is limited. With the advent of AI, it is conceivable that this BA will be replaced by an AI solution that interfaces directly with the business to produce either IT solutions or output that can be used to create them. For years this has been a dream of many companies, with easier-to-use software being the traditional way to limit IT expenses – think how many business users use Excel without ever talking to a BA. AI will make it a reality for everybody to ditch the IT requirements BA. Instead of having to learn an interface specific to a piece of software, the business will get a sophisticated human interface from the AI that replicates what the BA does today. For the business it will be as if they are working with a BA, but without the human cost. Certainly it will take time and money to develop this BA-replacement AI, but once developed it can easily be distributed and shared.

In summary, AI will be a great step forward for business but will negatively impact the Business Analyst market as we know it today. Business Analysts who focus only on IT requirements would be well advised to move into Product Owner roles, or to get involved in the AI development now happening, so that they can become experts in the field.

VW: a lesson in marketing versus regulations

By now you will be very aware of the VW diesel scandal, where the software on the car detected when the car was being tested and controlled exhaust emissions to pass the test.

Anyone who works in gathering requirements can easily see the problem here. There were two competing sets of requirements, Marketing and Regulatory, and in the end the marketing side won out.

Big business is a game of cat and mouse. Laws exist for many things, but business tends to view a law in terms of the risk and cost of being caught versus the benefit of not complying. If a law is not enforced 100%, business will start to treat it as optional. There are numerous cases of settlements between car companies and the US government or consumers. The Titanic is a classic example of the letter of the law being met while its intent was missed: the intent was to have enough lifeboats to save lives, but the law had not been written to require lifeboat capacity to match the number of passengers.

When gathering requirements for a solution, care must be taken to understand the implications of giving one set of requirements priority over another. Risk analysis is supposed to ensure that the VW and Titanic situations never happen today. However, profit is a powerful master, and it can blind people to the obvious.

Double-check those requirements that fly in the face of morals, to make sure you are not ignoring something that will later make you a headline.


The industrial revolution 2.0 – where Jane & John Doe programs make sense

If you have ever seen pictures from the original Industrial Revolution (1790–1870), you will have seen machines producing goods while humans kept them supplied with materials. In some cases it was dangerous work, as the humans darted under the mechanism of the machine to keep it supplied. One wrong step and the human resource was injured or killed.

These machines were, in their own way, early pieces of programming: basically the steampunk of code, where the internal workings are completely visible. Humans made up the shortfall wherever they could not be replaced easily or affordably by a machine.

Step forward to today: while the brass and iron have vanished, we still have humans filling the roles where machines have not caught up.

Amazon pickers are one example of humans still meeting the need.

When, you ask, does it make sense to replace the human programs (let’s call them Jane and John Doe)?

NOTE: This article is a somewhat tongue-in-cheek consideration of the removal of humans from the workforce and is not meant to offend anyone worried about an AI takeover.

Let’s first look at the benefits of our human Jane and John Doe programs:

1 – Easily programmed if the task is not too complicated.

2 – Can be programmed by other existing programs.

3 – Adaptable interface – buttons, levers, switches, and so on are not an issue.

4 – Can be replaced if failing.

5 – Low short term investment costs.

6 – Can be easily reprogrammed as tasks change.

7 – Multiple interface methods for programming – auditory, touch, visual.


The cons of Jane and John Doe:

1 – A program can leave of its own accord, requiring another program to be obtained.

2 – A program can be injured, requiring maintenance costs to be paid even if another program replaces it.

3 – Not all programs are of equal ability, which can cause quality issues.

4 – Only a limited number of transactions per hour can be handled, and there is a risk of memory leakage if the task is too frequent or repetitive.


Now let us consider the attributes of the equation that determines when to replace the Jane and John Doe programs with actual computerized machines (a rough break-even sketch in code follows the list):

1 – Cost of your current Jane and John Does, plus the cost to remove them from the role, versus the cost of the computerized machine.

2 – Frequency of the transaction – a higher or increasing frequency raises the number of Jane and John Doe programs you require, making a computerized alternative more attractive.

3 – Availability of Jane and John Does – if they are getting harder to find, their cost goes up.

4 – Complexity of the task – as with point 3, if the complexity of the task rises, the number of Jane and John Does who can do it falls, increasing their cost.

5 – Long-term need for Jane and John Doe – if the task is not changing and will be around a long time, programming a computerized alternative makes sense, as the long-term return can be seen.

6 – Reliability of the computerized alternative, or the level of risk a single point of failure can create. With a large set of human programs there is a lot of redundancy built in if one fails; with a computerized machine, when it fails there is no backup until it is repaired.
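As a rough, purely illustrative sketch of point 1 (every figure below is invented), the break-even calculation might look like this:

```python
def payback_years(workers: int, annual_cost_per_worker: float,
                  removal_cost: float, machine_capex: float,
                  machine_annual_opex: float) -> float:
    """Years until the machine beats keeping the Jane and John Doe programs."""
    annual_saving = workers * annual_cost_per_worker - machine_annual_opex
    if annual_saving <= 0:
        return float("inf")  # the machine never pays for itself
    upfront = machine_capex + removal_cost  # severance, retraining, and so on
    return upfront / annual_saving

# Invented numbers: 5 workers at 40k/year versus a 300k machine costing 30k/year to run.
print(f"{payback_years(5, 40_000, 50_000, 300_000, 30_000):.1f} years")  # ~2.1
```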

There are probably a multitude of other reasons to keep or replace Jane and John Doe. This article is just meant to make you think about it from an ROI point of view, and about how history repeats itself 200 years later.

To quote what a head of IT operations said to me back in 1989: “As soon as the cost of the tape system comes down to being cheaper than the staff, I will get rid of the operations staff.” By 1992 the operations staff were out of a job; a machine had replaced them, because the cost had come down enough. Machines eventually get cheaper than their human counterparts.


When software kills due to incomplete requirements

If you are lucky, your software has not been responsible for the death of anyone to date. If you are unlucky then you know it.

When an analyst gathers requirements for a piece of software, there is a tendency to focus on the happy path and ignore the surrounding paths that can lead to disaster. Unfortunately, it can take real events to expose the missing requirements, and sometimes death is the result.

To be fair, we humans can still kill ourselves without software, whether with a loaded gun or a speeding car taking a bend too fast. However, software in some cases seems to give people a false sense of security. In other cases it gives them the power to do something that would not have been possible if they were directly engaged with the physical controls, and that leads to disaster.

The article below refers to two cases where software enabled a pilot to do something they should not have been allowed to do, with death as the end result.

Lessons from SpaceShipTwo’s crash

In the above article, the situation was different from my previous article about lack of tactile feedback. In both cases the pilots knew what they were doing; they just did it at the wrong time, or too frequently for the specific vehicle to survive.

As an analyst, be it a systems analyst or a business analyst, it is not enough to think of just the happy path. Whenever you are gathering requirements, you also need to think about what will keep us on the happy path. Wherever there is an interaction or a key data point, ask yourself whether the event that triggers it can occur at the wrong time or too many times.

Look for the ways one can step off the path, and see if you can build either a metaphorical wall to keep us on the path or a way to get back on it before any damage is done.
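As a minimal illustration of that kind of wall (the states and limits here are invented, not the actual spacecraft logic), a guard can refuse an input that arrives in the wrong state or too often:

```python
import time

class ActionGuard:
    """Sketch of a guard: allow an action only in the right state, at a safe rate."""
    def __init__(self, allowed_states: set[str], min_interval_s: float):
        self.allowed_states = allowed_states
        self.min_interval_s = min_interval_s
        self._last = float("-inf")

    def permit(self, current_state: str) -> bool:
        now = time.monotonic()
        if current_state not in self.allowed_states:
            return False  # wrong time: keep us on the happy path
        if now - self._last < self.min_interval_s:
            return False  # too frequent for the vehicle to survive
        self._last = now
        return True

guard = ActionGuard(allowed_states={"coast"}, min_interval_s=5.0)
print(guard.permit("powered_ascent"))  # False: right command, wrong time
print(guard.permit("coast"))           # True
print(guard.permit("coast"))           # False: repeated too quickly
```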

Data handling – know when to bring the experts on board.

We all know about the Y2K incident with the 2-digit year, yet there are still examples of data storage lengths being inappropriate for the data to be stored.
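As a tiny illustration of the class of problem (the windowing rule below is an assumption, not a standard), a 2-digit year forces a guess about the century, and the guess eventually goes wrong:

```python
def expand_two_digit_year(yy: int, pivot: int = 70) -> int:
    """Guess the century for a 2-digit year: a 'windowing' patch, not a real fix.
    With pivot=70: 70-99 -> 19xx, 00-69 -> 20xx."""
    return (1900 if yy >= pivot else 2000) + yy

print(expand_two_digit_year(99))  # 1999
print(expand_two_digit_year(5))   # 2005
# The ambiguity is built in: a record from 1905 now silently becomes 2005.
# Storing the full 4-digit year removes the guess entirely.
```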

If you are a Business Analyst who deals with data, it is important to keep questioning the data requirements to ensure they meet the needs of the business and the application now, and especially in the future.

Industries where data is critical to their function will probably leverage Data Modelers, Data Engineers, and Data Scientists to manage data definition. As BAs, we should not be afraid to state when the data knowledge is beyond us and ask for the project to employ one of these specialists. Do not try to wing it; the end result can be expensive for the company.

To read up on some of the impacts of poor data handling, see this article from the BBC:

Data Handling that led to disasters
