Archive for August, 2006

A Smart Privacy Move by GSA: Arranging Credit Monitoring Agreements

Thursday, August 31st, 2006

The General Services Administration (GSA) announced on Tuesday (8/29) that it is purchasing blanket credit monitoring services from three different companies at reduced rates, for use when data privacy breaches occur.

"Washington DC – The U.S. General Services Administration awarded Blanket Purchase Agreements (BPAs) to assist Federal agencies in protecting the confidentiality of personal credit and payment information, as well as providing a fast and effective solution for Federal agencies needing commercial-off-the-shelf credit monitoring services.

The BPAs were awarded to Equifax, Inc. based in Atlanta, Ga., Experian Consumer Direct of Irvine, Calif., and Bearak Reports, a small, woman-owned firm in Framingham, Mass.

In the wake of recent incidents that threatened the confidentiality of personal information, this action by GSA will allow Federal agencies to take advantage of significantly reduced unit pricing and volume discounting available through these agreements.  They can also select different levels of credit monitoring services depending on the degree of vulnerability, risk, and protection.

The BPAs also eliminate separate contracting and open market costs that result from separate agencies searching for sources, developing technical documents and solicitations, and evaluating offers.  Significantly reduced pricing, strong oversight and reporting, and excellent customer service from these commercially available credit monitoring services are now available on a government-wide basis.   

The BPAs do not obligate funds.  There is no limit on the dollar value of task order purchases made under the BPA.  BPA vendor numbers are as follows:

GS-23F-06-E3-A-0013 Bearak Reports (Woman-Owned, Small)
GS-23F-06-E3-A-0014 Equifax Inc. (Large)
GS-23F-06-E3-A-0015 Experian Consumer Direct (Large)"

This is a good, proactive move on the part of the government, particularly considering how it has mishandled recent and past privacy breaches. 

The Bearak Reports site is interesting.  I think it is a good thing to include a small boutique organization such as this alongside two of the big fish in this industry space.  Bearak Reports offers three different levels of identity theft policies; I don’t know which of these the government cut the deal for, but most likely the one with the lowest coverage, considering the government also has the other two companies it can use.

I wonder what the "significantly reduced unit pricing and volume discounting" amounts are?  It seems this would be something the GSA would have to make available to the public.  Perhaps it’s just not posted yet…I couldn’t find it on the government sites.  Something to check on next week…

Will the government then fulfill the original promises for credit monitoring they made to the 26.5 million individuals involved with the VA incident that they later reneged upon when the laptop and hard drive were found months later? 

Will they use one credit monitoring service per incident, or multiple services?  Will it depend upon the type and complexity of the incident?

It would be interesting to know the parameters around which they will use these credit monitoring services:

  • Have they documented the types of incidents for which credit monitoring should be launched?
  • Have they documented how to determine the types and levels of credit monitoring?
  • Have they documented the specific types of personal information involved with the breach that would trigger credit monitoring?
  • Are they going to establish a minimum number of individuals that must be involved?  It would be too bad if the number of individuals were a factor; isn’t it important to help prevent bad things from happening to each individual whose information an incident impacts, and not just to groups of a specific size?


AT&T Handles Hacker Theft of Personal Data Better Than Many Others Have

Wednesday, August 30th, 2006

Today Computerworld reported that hackers broke into AT&T’s systems over this past weekend.

"Malicious hackers broke into one of AT&T Inc.’s computer networks and stole credit card data and other personal information from several thousand customers who shopped at the telecommunication giant’s online store. AT&T said it was notifying "fewer than 19,000" customers whose data was accessed during the weekend break-in, which it said was detected within hours.  The company said it immediately shut down the online store, notified credit card companies and was working with law enforcement agencies to track down the hackers.  "We recognize that there is an active market for illegally obtained personal information," Priscilla Hill-Ardoin, AT&T’s chief privacy officer, said in a statement. "We will work closely with law enforcement to bring these data thieves to account," Hill-Ardoin said.  AT&T said it would also pay for credit monitoring services to assist in protecting the customers involved. The data theft involved people who had bought DSL equipment for high-speed Internet access.""

It is refreshing to see that AT&T is not trying to downplay the potential seriousness of the incident. 

Breach response actions they did right:

  • Notified the impacted individuals quickly.  They did not wait months to notify, as other organizations such as ChoicePoint and the Department of Veterans Affairs have done in the past.
  • Did not sugar-coat the potential impact of what could be done with the data.  They acknowledged that there are many fraudsters and criminals out there who make significantly large amounts of money selling personal information to other criminals.
  • Did not say that the information had not been misused.  Too many times companies try to shrug off the potential impact of the incident by saying that they do not believe stolen information had been used, or that there was no malicious intent by the unknown hacker, when in fact there is no way they could possibly know this.
  • Is paying for credit monitoring for the 19,000 impacted individuals.  Such credit monitoring, while not 100% effective, certainly will help the impacted individuals learn if their data is being used for fraud in many ways.  More…actually all…businesses must accept responsibility for their security incidents and step up and pay for this monitoring for impacted individuals instead of telling the individuals they must pay for it themselves.

I’ll be interested to see follow-up information on this incident, if there is any.


New Breach Prevention and Detection Study

Tuesday, August 29th, 2006

Yesterday the Ponemon Institute and PortAuthority Technologies released a new study, "National Survey on the Detection and Prevention of Data Breaches" that is an interesting read. 

Representatives from 853 U.S.-based organizations completed a web-based survey addressing the following issues:

"1. How do information security practitioners respond to data breaches?
2. What technologies, practices and procedures are employed by organizations to detect and prevent data breaches?
3. What are the issues, challenges and possible impediments to effectively detecting and preventing data breaches?
4. How do organizations attempt to enforce compliance with its data protection policies?"

You can download the full report, which has much, much more information than the few stats I reference here, from the PortAuthority site.

Some findings that caught my eye…

Only 66% of the respondents use technology to detect and/or prevent data breaches. 

The following were the reasons given by the organizations who indicated they do *NOT* use technologies to detect and/or prevent data breaches:

35%  Technology-based solutions are too expensive.   
16%  Existing manual procedures are more than adequate for our company’s data breach detection and prevention activities.   
16%  Our company is not vulnerable to data breaches.   
12%  The false positive rate of existing technology-based solutions is too high.   
8%  Technology-based solutions are too difficult to implement.   
6%  Our company does not have the in-house expertise to utilize these solutions.   
5%  Existing technology-based solutions are not able to detect or prevent breaches with a high level of assurance.   
2%  Detecting breaches is not a priority for our company’s senior executive team.
   

In response to the question, “In addition to technology solutions, what other manual practices and procedures does your organization rely upon to detect and prevent data leaks?” the respondents indicated:

81%  Policies, including standard operating procedures (SOPs)   
71%  Close supervision and management of all data handling functions   
65%  Training and communication programs   
40%  Rigorous background checks for all employees who handle sensitive or confidential information   
30%  Independent audits   
29%  Self assessments by business or functional units
   

Some of these reasons are somewhat surprising, and some reflect quite risky choices for businesses, particularly relying upon manual practices alone. 

Manual practices are certainly necessary and all listed are very important, but manual procedures alone cannot detect and/or prevent all the many types of data breaches that have occurred at increasingly alarming rates.  Technology solutions are necessary to enhance and support the manual efforts.  How will manual procedures alone prevent keyloggers from being installed through covert methods and channels?  How will manual methods alone prevent an employee from accidentally sending personal information out through email within the message or as an attachment?  How will manual methods alone prevent a database of customer information from being accidentally posted to a website?  And so many other types of incidents that have actually occurred…many times…that have resulted in significant business impact and disruption of individuals’ lives through identity fraud, theft and other cybercrime.

I believe strongly that the human factor is the weakest link in information security, but I also believe strongly that technology must also be used to enhance the human factor where training, policies, and the other human methods just cannot suffice on their own as dependable detection or prevention solutions.
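
To make this concrete, here is a minimal sketch of the kind of automated check a content filter or data leak prevention tool performs on outgoing email.  It is my own illustration, not something from the survey or any particular product; the patterns are deliberately simple and the message-handling hook is assumed.

    import re

    # Deliberately simple patterns; real products use far more robust detection.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD_PATTERN = re.compile(r"\b\d{13,16}\b")

    def scan_outgoing_message(subject: str, body: str) -> list:
        """Return a list of reasons this message should be held for review."""
        findings = []
        text = subject + "\n" + body
        if SSN_PATTERN.search(text):
            findings.append("possible Social Security number")
        if CARD_PATTERN.search(text):
            findings.append("possible payment card number")
        return findings

    if __name__ == "__main__":
        body = "Hi, the customer's SSN is 123-45-6789; please update the account."
        issues = scan_outgoing_message("Account update", body)
        if issues:
            print("Hold for review:", ", ".join(issues))
        else:
            print("No obvious personal data found; message released.")

Even a simple automated check like this catches the accidental-email example above, which no amount of policy reading will reliably prevent on its own.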

In response to the question, “What technologies does your organization use to detect data breaches?” the participants gave the following responses:

39%  Content filtering technologies   
28%  Keyword monitors   
25%  Data leak detection and prevention   
23%  Intrusion detection systems (IDS)   
21%  Other   
15%  Packet sniffers   
9%   Digital rights management solutions
   

It surprised me that IDS was only used by 23%.  I know a growing number of organizations are using content filters and keyword scanners, but it is very surprising if they are actually used more than IDS and "Other" methods.  I wonder too about how the survey was worded.  Some of these terms are somewhat ambiguous, some subjectively redundant with some of the other terms, and depending on the respondent’s background they may not have answered in a way that most IT or Info Sec practitioners would have answered.  According to the survey 50% of the respondents had IT titles while the other 50% were from other "titles," so this may have impacted the results.

In response to the question, “What technologies does your organization use to prevent data breaches?” the respondents answered as follows:

41%  Identity and access management systems   
27%  VPN or other secure token-based networks   
22%  Encryption technologies   
16%  Keyword monitors   
15%  Intrusion detection systems (IDS)   
13%  Intrusion prevention systems (IPS)   
13%  Data leak detection and prevention   
11%  Content filtering technologies   
10%  Other   
8%    Packet sniffers   
7%    Digital rights management solutions
   

Although the report indicated surprise at the identity and access management systems being at the top of the list, it really doesn’t surprise me that much.  Most organizations…primarily business unit leaders…still believe using passwords and network access controls is enough to prevent security incidents and breaches.  However, I hope there are actually MORE companies using identity and access management systems than just 41%.  Again, I’m not sure if this term was clearly defined within the survey or not, but if it wasn’t it is open to a very wide range of interpretations.

In response to the question, "Has anyone in your organization been fired, demoted or reprimanded as a result of leaking sensitive or confidential information outside the company?" the respondents indicated:

31%  Yes
46%  No
23%  Unsure

The way this question reads, it sounds as though close to half of the respondents indicate that individuals who DID leak information, or cause data breaches, were not reprimanded.  And if the 23% that were unsure actually should have been "No" answers, this would mean almost 70% of organizations do not apply any disciplinary action at all when they know people have broken policies or have done something careless or malicious that resulted in a data breach.  This is very troubling, and it also points to why depending upon manual methods of prevention and detection alone will not work; if policies are not enforced with disciplinary actions such as demotions, terminations, and so on, then personnel will not be motivated to follow the policies or the other processes to protect data.  Do you think 70% of organizations really do not hold their personnel accountable and do not apply sanctions when people violate policies?  Scary. 

In response to the question, “What is the primary reason why enforcement may not be effective?” the respondents answered:

29%  Enforcement is difficult because of the large number of methods to bypass the detection system.   
28%  Enforcement is difficult because of the large number of false positives generated by detection systems and manual procedures.   
16%  Enforcement solutions are too costly to implement.   
14%  Enforcement is difficult because management does not appear to be too concerned about compliance with our policy.   
8%  Enforcement is difficult because of our inability to detect data leaks in a timely fashion.   
4%  Other
   

Interesting.  Recall that 34% did not use technology to detect or prevent breaches, so for those respondents there is no technology-based detection system to bypass, even though the top two answers presume one.  It would also have been interesting to know what is meant by "enforcement solutions" being too costly to implement; I wish they had described this.

Management concern can certainly be raised by holding managers accountable, and applying sanctions to them, when breaches happen that could have been prevented.

Detecting data breaches in a timely manner must be done using a combination of methods, starting with documenting where all your personal information is located and keeping your data inventory up-to-date.
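
As a hedged illustration of how a data inventory effort might be kept current, the sketch below walks a directory tree and records which files appear to contain Social Security numbers.  It is my own example, not something from the Ponemon/PortAuthority report; the paths, the file types scanned, and the SSN pattern are all assumptions.

    import csv
    import re
    from pathlib import Path

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def build_inventory(root: str, report_path: str) -> None:
        """Record which text files under `root` contain SSN-like strings."""
        with open(report_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["file", "ssn_like_matches"])
            for path in Path(root).rglob("*.txt"):
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue  # unreadable file; note it separately in a real inventory
                hits = len(SSN_PATTERN.findall(text))
                if hits:
                    writer.writerow([str(path), hits])

    if __name__ == "__main__":
        # Hypothetical locations; point these at your own file shares and output area.
        build_inventory("/data/shared", "personal_data_inventory.csv")

Running something like this on a schedule keeps the "where is our personal information?" answer from going stale between manual reviews.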

There are many more findings and statistics within the report, but this gives you an idea of what types of issues were covered. 

How does your organization compare with the survey results?


Crime Really Doesn’t Pay: Computer Criminals Sentenced to Prison for Copyright Infringement and Computer Attacks

Monday, August 28th, 2006

The arm of the law *can* be long when it comes to nabbing computer criminals and sending them to jail for their crimes.  A few examples of how computer crime does not pay when you’re caught were reported in the past few days.

Example 1:

The U.S. Department of Justice reported, "Operator of Massive For-Profit Software Piracy Website Sentenced to 6 Yrs; Defendant Made Over $4.1 Million in Illegal Revenue."  The criminal must also

"pay restitution of more than $4.1 million, and perform 50 hours of community service. The ordered forfeiture involves a wide array of assets, including numerous airplanes, a helicopter, boats, and cars, which Ferrer had purchased with the profits from his illegal enterprise. In particular, Ferrer forfeited a Cessna 152; a Cessna 172RG; a Model TS-11 ISKRA aircraft; a RotorWay International helicopter; a 1992 Lamborghini; a 2005 Hummer; a 2002 Chevrolet Corvette; two 2005 Chevrolet Corvettes; a 2005 Lincoln Navigator; an IGATE G500 LE Flight Simulator; a 1984 twenty-eight-foot Marinette hardtop express boat; and an ambulance. Ferrer has also agreed to surrender the proceeds of sales of two fire trucks that were also bought with his illegal proceeds."

It’s amazing how many people make guys like this multi-millionaires. 

"Beginning in late 2002 and continuing until its shutdown by the FBI on Oct. 19, 2005, Ferrer and his co-conspirators operated the www.BUYSUSA.com website, which sold copies of software products that were copyrighted by companies such as Adobe Systems Inc., Autodesk, and Macromedia Inc. at prices substantially below the suggested retail price. The software products purchased on the website were reproduced on CDs and distributed through the mail. The operation included a serial number that allowed the purchaser to activate and use the product. Further investigation established that, during the time of its operation, www.BUYSUSA.com illegally sold more than $4.1 million of copyrighted software. These sales resulted in losses to the owners of the underlying copyrighted products of nearly $20 million.  After receiving complaints from copyright holders about Ferrer’s website, an undercover FBI agent made a number of purchases of business and utility software from the site, which were delivered by mail to addresses in the Eastern District of Virginia. Ferrer pleaded guilty before Judge Ellis on June 15, 2006, to one count of conspiracy and one count of criminal copyright infringement for selling pirated software through the mail."

Back in the early 1990s I did some software audits at several companies, one of which had only one legitimate software package among the roughly 500 different software packages installed.  Many of the other companies I reviewed were not much better.  That was at a time when illegal software copying was running rampant among many (perhaps most) businesses that were trying to save money and didn’t really understand the concept of software licensing and copyright compliance.  At that time the Software Publishers Association (SPA) was becoming very active in fighting illegal software copying. 

The Business Software Alliance (BSA) was key to the investigation in this particular case.  I wonder how many businesses were customers of this guy?  I believe most businesses now try to be compliant with software licensing requirements, but I can see how small and medium sized businesses (SMBs) could have purchased from him in an effort to save money, not realizing they were purchasing illegal software.  I wonder how long it will take the BSA to contact all his customers to tell them to destroy their software and purchase legitimate copies?

Example 2:

The New Jersey Attorney General’s office reported, "Michigan Man Gets 30 Months for Conspiracy to Order Destructive Computer Attacks on Business Competitors" on August 25, 2006.  His co-conspirator got sentenced to 5 years in prison. 

"U.S. District Judge Joseph E. Irenas also ordered Jason Salah Arabo, 19, of Southfield, Michigan, to make restitution of $504,495 to his victims ‚Äì the websites he targeted as well as an Internet hosting company.  Arabo pleaded guilty today before Judge Irenas on April 12, to a one-count Information charging him with conspiracy to cause the transmission of a program, information, code, and command, and as a result of such conduct, intentionally cause damage without authorization, to a protected computer. In pleading guilty, Arabo acknowledged that in 2004, he ran two web-based companies, www.customleader.com and www.jerseydomain.com, that sold sports apparel, including reproductions of sports uniforms, popularly known as ‚Äúretro‚Äù or ‚Äúthrowback‚Äù jerseys."

Arabo was still a teenager when these attacks occurred, and the co-conspirator who actually ran the attacks was only 16.  So young…where were the people who should have been modeling good ethical behavior for them?  This really points out a need to incorporate information security and ethical computer use within our school systems, as well as within our homes, whenever and however possible.  This is not a new issue; in 1994, a National Computer Ethics and Responsibilities Campaign (NCERC) was launched to create an "electronic repository of information resources, training materials and sample ethics codes" that would be available on the Internet for IS managers and educators.  The National Computer Security Association (NCSA) and the Computer Ethics Institute co-sponsored NCERC.  The NCERC Guide to Computer Ethics was developed to support the campaign.  However, it appears now that it is only available in hard copy by postal mail request.  Too bad it is not available online to make it easier to raise awareness and educate everyone about computer use and its ethical impacts.

"According to Assistant U.S. Attorney Eric H. Jaso, who prosecuted the case, Arabo admitted that in online ‚Äúinstant message‚Äù conversations he met a New Jersey resident, Jasmine Singh, who communicated using the online name ‚ÄúPherk.‚Äù Arabo learned that Singh had covertly infected some two thousand personal computers with programs that enabled him to remotely control them. Singh demonstrated to Arabo online that he could command these computers to conduct attacks, known as distributed denial of service, or ‚ÄúDDOS‚Äù attacks, on computer servers and disable websites supported by those servers. Arabo admitted that he asked Singh to take down the websites and online sales operations of certain of his competitors. Arabo promised to compensate Singh for the attacks with merchandise, including designer sneakers.  In August 2005 Singh, who was 16 at the time of the attacks, pleaded guilty as an adult to two counts of computer theft in New Jersey State Superior court. He has since been sentenced to five years in prison and ordered to pay $35,000 in restitution for damage caused by the attacks."

Example 3:

On August 25, 2006 Christopher Maxwell, 21, of Vacaville, California was sentenced to three years in prison "for launching a computer attack that hit tens of thousands of computers, including some belonging to the Department of Defense, a Seattle hospital and a California school district.  Maxwell was also sentenced to three years of supervised release. He pleaded guilty in May to federal charges of conspiracy to intentionally cause damage to a protected computer and conspiracy to commit computer fraud.  U.S. District Judge Marsha J. Pechman said the crime showed "incredible self-centeredness" with little regard for the impact on others. She said the prison time was needed as "deterrence for all those youth out there who are squirreled away in their basements hacking.""

"Maxwell and two juvenile co-conspirators were accused of using "botnet" attacks — programs that let hackers infect and control a computer network — to install unwanted internet advertising software, a job that earned them about $100,000.  Three victims testified at Maxwell’s sentencing: a representative of Seattle’s Northwest Hospital, damaged in February 2005; a representative of the U.S. Defense Department, which reported damage to hundreds of computers worldwide in 2004 and 2005; and a former system administrator for the Colton Unified School District in California, where more than 1,000 computers were damaged over several months in 2005."

The U.S. Department of Justice site indicates the estimated dollar loss to the business victims of Maxwell’s crimes was $252,000.  I imagine, based upon the length of time over which the attacks occurred and the number of computers impacted, it was likely much more costly when factoring in the hours of staff time lost, the cost of hiring people to clean up the damage, and the cost of legal counsel.

The press release from May when the charges were made indicated "Further investigation revealed MAXWELL’s computer intrusions also did more than $135,000 of damage to military computers in the United States and overseas."

Causing downtime and computer problems at hospitals could also have a very real threat to patient health.

These are just three examples, but good representations of the need for a strong information security program that addresses the risks for, and within, each particular organization.  Security must be applied not only to meet the compliance requirements of laws and regulations, but in ways that address the existing threats, risks and vulnerabilities of each organization’s unique environment.   


Identity Theft and Awareness

Sunday, August 27th, 2006

Identity theft is reported often in the press.  If you haven’t read the news reports lately, just check the Privacy Rights Clearinghouse site to see all the different types of privacy breaches that occur that could ultimately lead to identity theft.

It is a great awareness-raising activity to let your personnel know the impact that identity theft can have upon them personally.  They can then better understand, and empathize with, how identity theft affects your customers, consumers and employees.  A very good book to provide to all personnel to help them understand this impact is "Safeguard Your Identity: Protect Yourself With A Personal Privacy Audit" by Mari Frank.

Identity theft is also the theme of this year’s Global Security Week (GSW), the week of September 4.  The GSW web site contains some great ideas about activities to do within your organization to raise awareness.


Data-mining, Oversight and Privacy

Saturday, August 26th, 2006

TechWorld published an interesting and thought-provoking article about data mining today, pointing out some of the potential benefits of data mining, but also some of the problems that arise when there is a lack of oversight. 

"Data mining is a relatively new field within computer science. In the broadest sense, it combines statistical models, powerful processors, and artificial intelligence to find and retrieve valuable information that might otherwise remain buried inside vast volumes of data. Retailers use it to predict consumer buying patterns, and credit card companies use it to detect fraud. In the aftermath of September 11, the government concluded that data mining could help it prevent future terrorist attacks."

In 2004 a Government Accountability Office (GAO) report found that US federal agencies were actively engaged in or planning 199 data mining projects, with 122 of them involving personal information.  A 2005 GAO report indicated significant concerns about agencies not following oversight procedures and not implementing the recommended (possibly meant to be required) privacy and information security procedures for the data mining initiatives. 

A disturbing loophole in the directive covering data mining is nicely summarized in this statement, "While the federal laws and guidance previously outlined provide a wide range of privacy protections, agencies are allowed to claim exemptions from some of these provisions if the records are used for certain purposes."  It sounds as though a large number of agencies claim such exemptions.

The GAO report included the following steps the GAO had recommended to protect privacy.

Table 1: Key Steps Agencies Are Required to Take to Protect Privacy, with Examples of Related Detailed Procedures and Sources
Source: GAO analysis of the Privacy Act, E-Government Act, FISMA, and related guidance.
Key steps to protect privacy of personal information, with examples of procedures:

1.  Publish notice in the Federal Register when creating or modifying system of records
• Specify the routine uses for the system
• Identify the individual responsible for the system
• Outline procedures individuals can use to gain access to their records

2.  Provide individuals with access to their records
• Permit individuals to review records about themselves
• Permit individuals to request corrections to their records

3.  Notify individuals of the purpose and authority for the requested information when it is collected
• Notify individuals of the authority that authorized the agency to collect the information
• Notify individuals of the principal purposes for which the information is to be used

4.  Implement guidance on system security and data quality
• Perform a risk assessment to determine the information system vulnerabilities, identify threats, and develop countermeasures to those threats
• Have the system certified and accredited by management
• Ensure the accuracy, relevance, timeliness, and completeness of information

5.  Conduct a privacy impact assessment
• Describe and analyze how information is secured
• Describe and analyze intended use of information
• Have assessment reviewed by chief information officer or equivalent
• Make assessment publicly available, if practicable

All good recommendations.  I wonder which of the government agencies read, let alone implement, GAO recommendations?  What percentage claim exemptions?  As the TechWorld report noted:

"Most data mining projects are not subjected to a rigorous business case analysis. Two current intelligence CIOs who were otherwise unable to comment for this story agreed that this is an issue that they struggle with. The US DoD’s Technology and Privacy Advisory Committee (TAPAC) developed a 10-point system of checks and balances that it recommended every agency head apply to data mining projects, but Cate says that it has never been implemented. Similarly, the US National Academy of Sciences recently appointed a committee to develop a methodology that the government can use to evaluate the efficacy of its antiterror data mining projects, but the target date for its report is still more than a year away."

I believe, based upon what I’ve heard from colleagues, clients and other info sec and privacy professionals at meetings and conferences, that the use of data mining is going to increase exponentially in the next few years.  This is widely evidenced by the NSA’s data mining of phone records, and also by the growing data mining of public social networking sites, as described within the January 2006 Congressional Research Service (CRS) report, "Data Mining and Homeland Security: An Overview."  A couple of snippets give a feel for the data mining issues described within the report:

"Data mining has become one of the key features of many homeland security initiatives. Often used as a means for detecting fraud, assessing risk, and product retailing, data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships in large data sets. In the context of homeland security, data mining can be a potential means to identify terrorist activities, such as money transfers and communications, and to identify and track individual terrorists themselves, such as through travel and immigration records."

"As with other aspects of data mining, while technological capabilities are important, there are other implementation and oversight issues that can influence the success of a project’s outcome. One issue is data quality, which refers to the accuracy and completeness of the data being analyzed. A second issue is the interoperability of the data mining software and databases being used by different agencies. A third issue is mission creep, or the use of data for purposes other than for which the data were originally collected. A fourth issue is privacy. Questions that may be considered include the degree to which government agencies should use and mix commercial data with government data, whether data sources are being used for purposes other than those for which they were originally designed, and possible application of the Privacy Act to these initiatives. It is anticipated that congressional oversight of data mining projects will grow as data mining efforts continue to evolve."

"As additional information sharing and data mining initiatives have been announced, increased attention has focused on the implications for privacy.  Concerns about privacy focus both on actual projects proposed, as well as concerns about the potential for data mining applications to be expanded beyond their original purposes (mission creep). For example, some experts suggest that anti-terrorism data mining applications might also be useful for combating other types of crime as well. So far there has been little consensus about how data mining should be carried out, with several competing points of view being debated. Some observers contend that tradeoffs may need to be made regarding privacy to ensure security. Other observers suggest that existing laws and regulations regarding privacy protections are adequate, and that these initiatives do not pose any threats to privacy. Still other observers argue that not enough is known about how data mining projects will be carried out, and that greater oversight is needed. There is also some disagreement over how privacy concerns should be addressed. Some observers suggest that technical solutions are adequate. In contrast, some privacy advocates argue in favor of creating clearer policies and exercising stronger oversight. As data mining efforts move forward, Congress may consider a variety of questions including, the degree to which government agencies should use and mix commercial data with government data, whether data sources are being used for purposes other than those for which they were originally designed, and the possible application of the Privacy Act to these initiatives."

Data mining is nothing new…it’s been used in one way or another since the advent of the "super computer."  The differentiators between roughly 25+ years ago and now are 1) the increasing connectivity of multiple repositories of data and multiple computers…computer grids with seemingly limitless data storage, containing what is becoming an unlimited amount of personal information; and 2) the increasing speed and capability of the technology to cull through the data in the blink of an eye to find and correlate personal data.

"With great power comes great responsibility."  I use this Spiderman quote often…I think it applies to so many challenges that information security and privacy practitioners face…technology power and related responsibility really do make our professions interesting, important and often infuriating.  Data mining is powerful and that power must be contained.  You don’t want a data mining effort to turn into an out-of-control privacy destroying Doc Oc monstrosity.

With proper oversight, established accountability, and enforced procedures, data mining does not have to invade privacy.  Without these ingredients, however, privacy gets trampled and runs amok.  There have been many incidents involving bad data mining results and misuse of the mined data.  The discussion of those incidents is a good topic…for another time.

Does your organization have data mining initiatives going, or planned?  Be sure you are addressing information security and privacy issues…from the start of the projects and all the way through until the data mining effort is retired…if it ever is.  Remember:

1.  Your organization risks violating your own privacy policies and agreements when you link the consumer and customer data you collect to carry out different customer-facing processes, and subsequently amass them in different databases.
2.  When your organization analyzes web site data and then links the findings with data acquired from other applications or third-party data providers in order to develop lists targeting specific consumers, you are running a high risk of being in noncompliance with your own policies, contracts and applicable laws.  This is particularly true for your non-U.S. customers/consumers.
3.  Does your organization use the data within your data mining initiatives for other purposes outside the scope of your intended and communicated use?  You run a high risk of regulatory noncompliance and potential lawsuits if you do this.
4.  Incorporate information security and privacy requirements and checks throughout your entire systems and applications development life cycle.  Document them.
5.  Document and communicate information security and privacy policies, procedures and standards for data mining projects, initiatives, applications and systems.  This demonstrates due diligence in addition to complying with several data protection laws.
6.  Learn from the mistakes and recommendations of others.  Read the GAO reports covering data mining and implement the recommendations that you could apply within your organization.  This demonstrates due diligence particularly in the eyes of regulatory auditors.
7.  Conduct privacy impact assessments.  Do them while planning the data mining initiative; following implementation; and regularly thereafter.
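
One technical control that supports several of these points is to pseudonymize direct identifiers before records ever reach a mining dataset, so analysts can still correlate records without handling raw personal data.  The sketch below is only an illustration under my own assumptions (the field names, the keyed-hash approach, and the hard-coded key are not from the GAO or CRS reports), not a complete solution.

    import hashlib
    import hmac

    # In practice the key comes from a managed secrets store; it is hard-coded
    # here only to keep the sketch self-contained.
    PSEUDONYM_KEY = b"replace-with-a-managed-secret"

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a keyed, non-reversible token."""
        return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    def prepare_for_mining(record: dict) -> dict:
        """Tokenize direct identifiers before a record enters the mining dataset."""
        cleaned = dict(record)
        for field in ("ssn", "name", "email"):  # assumed field names
            if field in cleaned:
                cleaned[field] = pseudonymize(cleaned[field])
        return cleaned

    if __name__ == "__main__":
        sample = {"ssn": "123-45-6789", "name": "Jane Doe", "zip": "50309", "claim": 412.50}
        print(prepare_for_mining(sample))

The same identifier always maps to the same token, so patterns can still be correlated across records, while re-identification requires access to both the key and the original data.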

Insider Threat & HIPAA: Computers Containing “Thousands” of Patient Files Stolen

Thursday, August 24th, 2006

Ten computers containing personal information on thousands of patients were stolen from a Hospital Corporation of America (HCA) regional office, and now the FBI is investigating.  The report did not say when the theft occurred, though.

"The computers were stolen from a secure building, and the thieves slipped by video surveillance. HCA is one of the nation’s leading providers of health care services. The company’s 200 plus hospitals and surgical centers serve thousands of patients in the US and around the world. The company is warning patients, and the FBI is now involved.

"For now investigators aren’t saying which regional office was targeted by thieves, but the stolen computers contain sensitive information — including social security numbers and thousands of files on Medicare and Medicaid patients treated at HCA hospitals."

The theft affects patients on Medicare or Medicaid who have failed to pay their co-pay or deductible, and those who were seen in an HCA hospital in Colorado, Kansas, Louisiana, Mississippi, Oklahoma, Oregon, Texas or Washington between 1996 and 2006. HCA did not believe any of the files stolen belonged to patients in Tennessee.

The theft has sent shockwaves through the system of the Nashville-based company raising concerns about security. Now a special call center has been set up to answer questions for concerned patients. Investigators thought the thieves stole the computer hardware to sell, and had no interest in using the information for identity theft.

So far there have been no leads on the thieves, and no arrests. The original location of the computers has not been disclosed, and will not be while the FBI investigates. The thieves got past some elaborate security, including a keypad lock and a password for access, making it possible that it was an inside job. With this in mind, HCA has taken steps to further beef up security."

A few thoughts about this incident…

  • Even though patient information was stolen from a healthcare provider (a HIPAA-defined "covered entity"), it is unlikely there will be any HIPAA violations declared.  They had what sounds like reasonable physical security in place.
  • From the report it certainly does sound very likely it was an inside job…considering video surveillance was bypassed, along with the keypad lock and password.  Organizations must always remember that some "trusted" insiders will turn out to be threats and possibly commit crime through their authorized capabilities.
  • It is good the hospital contacted all the patients involved, in addition to setting up a special call center to answer questions.
  • It is odd/interesting that the investigators, without (supposedly) knowing who the thieves were, would say they "had no interest in using the information for identity theft."  How could such a thing be known?  They must have much more information about this incident/theft than was reported.  No one can know the intent of an unknown person or persons.


Security Needed During Applications Development: Social Security Numbers Part Of Addresses on 7,601 Envelopes

Wednesday, August 23rd, 2006

7,601 people from Columbus, Ohio had their Social Security numbers visible as part of the address on mailings they received from the city’s income tax department.

"More than 7,500 people received letters from the city’s income tax division with their Social Security numbers visible through the envelope window, a problem blamed on a computer glitch. No recipients have reported problems with identity theft, and the numbers will not be visible from the outside on future mailings, city tax administrator Melinda Frank said. Social Security numbers serve as city taxpayers’ account numbers and are included in mailings for identification."

Gee, good to know they will not continue to mail letters with visible SSNs!

Why wasn’t this noticed before the mailings left the government office?  They should have had QA procedures for this. 

This is also a good example of the need to incorporate information security and privacy requirements and checks into the applications development process.  The inappropriate placement of the SSNs on the printouts that were subsequently stuffed into the envelopes should have been something checked during application testing and quality assurance.  Blaming a "computer glitch" is a weak effort to offload responsibility onto technology, as though it were beyond their control.  Humans program computers, and humans are ultimately responsible for the application flubs that result…not some mysterious and uncontrollable computer troll.
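
As a hedged sketch of the kind of automated check that could have caught this during testing, the unit test below asserts that nothing resembling an SSN appears in the address lines positioned to show through the envelope window.  The formatting function, the account-number format, and the loose SSN pattern are my own assumptions, not details from the city’s actual system.

    import re
    import unittest

    # Deliberately loose pattern so a nine-digit account number built from an SSN
    # is flagged even when extra characters are appended to it.  It may also flag
    # ZIP+4 codes; for this kind of check it is better to err on the side of review.
    SSN_LIKE = re.compile(r"\d{3}-?\d{2}-?\d{4}")

    def format_window_address(name: str, account_number: str, street: str, city_state_zip: str) -> list:
        """Build the address block that shows through the envelope window.
        The account number is intentionally left out of these lines."""
        return [name, street, city_state_zip]

    class WindowAddressTest(unittest.TestCase):
        def test_no_ssn_like_value_in_window(self):
            lines = format_window_address(
                "Pat Taxpayer", "123-45-6789A", "100 Main St", "Columbus, OH 43215"
            )
            for line in lines:
                self.assertIsNone(SSN_LIKE.search(line),
                                  "SSN-like value would be visible: %r" % line)

    if __name__ == "__main__":
        unittest.main()

A test like this in the print job’s build and QA pipeline turns "someone should have noticed" into a check that fails automatically before any envelopes are stuffed.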

"The 7,601 mailings were sent Aug. 4 to alert people who had filed tax estimates for this year that they could pay their balances online. Followed by one or two additional characters, it wasn’t obvious that the nine-digit numbers were Social Security numbers, Frank said. "To their next door neighbor who doesn’t know what their Social Security number is, it’s a line of numbers with an alpha letter after it," she said."

Making what comes across as a flippant remark is not a good way to respond to an incident.  Most people in the U.S. *could* identify an SSN followed by "one or two additional characters" as being an SSN…especially on an envelope with a return address from the city income tax division. 

Make sure when you create your own incident response plans that your communications to the press and directly to the victims are not flippant, dismissive or condescending to the victims and readers.  This spokesperson comes across as basically saying that most people are too dumb to know a SSN when they see it.  This fans the flames of anger for those impacted by the incident.  Your communications should instead be compassionate, apologetic, truthful and show concern.

"The tax division received three complaints by phone and two by e-mail. "Yes, the nine digits are followed by a letter, but it’s not that hard to look at it and figure out that it might be a Social Security number," one taxpayer wrote. "You would think that in this day of ID theft, the last thing a taxing authority would want to do is expose all their taxpayers to identity theft and open the city up to being sued.""

The number of concerned victims who come forward voicing their concerns will continue to grow.  Don’t underestimate the impact their concern and anger over an incident could have on your organization.  This quote summed it up nicely…"the last thing a taxing authority would want to do is expose all their taxpayers to identity theft and open the city up to being sued."  No organization wants to be sued for something that could have easily been prevented with good information security and privacy practices built into their SDLC process, and with basic QA procedures.

This is another good example to put within your awareness files.


Free Identity Theft Seminar: Theme of Upcoming Global Security Week

Tuesday, August 22nd, 2006

The theme of Global Security Week, coming up soon during the week of September 4, is Identity Theft.  If you have a chance to attend, there looks to be a very interesting and enlightening *free* identity theft seminar in Ireland, sponsored by BH Consulting.

"Seminar
Global Security Week Ireland will host a seminar on the dangers posed by Identity Theft to both consumers and businesses alike.  Identify theft is one of the biggest criminal growth areas and impacts people across all spectrums of life.   Highlighting the dangers posed by Identity Theft and how individuals and companies can best prevent becoming victims of this crime will be speakers from the following organisations;

The Garda Bureau of Fraud Investigation
ENISA (the European Network and Information Security Agency)
The Companies Registration Office

A panel discussion will follow, whereby the speakers will answer questions from the audience.

The agenda for the day is ;

14:00 – 14:15   Introduction – Mathieu Gorge
14:15 – 15:15   Identity Theft in the Real World – Garda Bureau of Fraud Investigation
                Protecting your online identity – Garda Computer Crime Unit
15:15 – 15:30   Coffee
15:30 – 16:00   Overview of Awareness Programmes in the EU: The First Line of Defence from Identity Theft – ENISA (the European Network and Information Security Agency)
16:00 – 16:30   Protecting your Company’s identity, including a demonstration of CORE – Companies Registration Office
16:30 – 16:50   Panel Discussion – How Best to Combat Identity Theft – Panel chaired by Brian Honan
16:50 – 17:00   Close – Brian Honan

The seminar will be hosted in the Burlington Hotel on Thursday the 7th of September from 2:00 p.m.  Registration is open to anyone concerned with Identity Theft."

If you won’t be in the area, think about attending something similar close to you.  Or, better yet for your organization, consider providing a half-day to full-day seminar about identity theft at your organization during Global Security Week.  You can contact your local InfraGard, ISACA, ISSA, or other information security professional chapter, and they can likely help you find some great speakers for your event. 

Also, if you are in the U.S., check with your state’s Attorney General office and invite him or her to come speak…they are often very willing to speak on the topic; I know the Iowa Attorney General’s office is very good with doing such events.
