Why Rio 2016 is both a problem and an opportunity for your data project


The data visualisations that we all see online and on social media are making us more open to using this kind of information at work.

Our daily lives are now drenched in data, delivered to us from our televisions, our computers, and the smartphones in our hands. Charts and tables are commonplace, but the online world and social media have also made infographics a powerful tool for the presentation of this information, especially when coupled with images, animations, video clips and written commentary. This data can come from a huge number and variety of sources, brought in from places all around the world.

A perfect example of this is what we have seen in the coverage of the Rio Olympics. Online news providers have taken great leaps of the imagination in how data can be delivered to us, harnessing the tools at their disposal. The Guardian is a prime example, with their coverage of Great Britain’s cycling success over Australia that saw Sir Bradley Wiggins win his fifth Olympic gold medal (1). The newspaper created an animated version of the race that showed what happened every second of the way, with the reader being able to click through each stage at their own pace.

Not to be left trailing in second place, The New York Times has drawn up an extensive set of data visualisations that shows exactly how well each country has done at each Olympics since the games began (2). The graphics are a brilliant mixture of aesthetics and information, delivering a huge amount of complicated data at a glance.

These high-tech ways of accessing data are becoming everyday experiences for many people, but how does this affect businesses beyond the mass media outlets, and should companies strive to access and make use of these new tools?

There is a danger that if some data analytics projects are at a fairly embryonic stage they could seem outdated by the time they’re implemented. After all, with online trends changing day-by-day, what seemed like a great idea just a few months ago could be old-fashioned by now.

This poses problems for businesses, which are pushed to keep up with the latest innovations but don’t want to shake up their operations. However, there is a good chance that your staff are using better, newer technology at home than they have access to at work.

There are echoes of the BYOD (Bring Your Own Device) phenomenon, where the smartphones that people were buying with their own money were far more advanced than the ones they were being given by their employers. BYOD was a clever way of working around this without companies having to regularly shell out for new phones.

Now your staff will be using the data analytics power of social media such as Twitter and LinkedIn in their personal lives, along with gathering data on themselves with mobile apps such as Runkeeper. It is commonplace to have data at our fingertips, and people will be happy to use equivalent tools at work.

It would be very easy for anyone in the position of running a company that is making use of data visualisations to look at the sort of tools that are being used elsewhere and become despondent at what they have at their disposal. Adopting new technologies can be expensive, and many employers could worry that the changes this can bring to a workplace could have a negative effect.

But what needs to be remembered is that any new data technology that is brought in will probably not be unfamiliar to your colleagues, and may be something they are already using in their everyday lives. With so much technology, and so much data, now available to each and every one of us, that new piece of software that you’re apprehensive about buying may not have the disruptive effect on your team’s way of working that you think, and could give a massive boost to your profit margins.

What’s also very important to remember is that data visualisations are only the endgame of a very long process, one that begins with gathering good quality data itself. While new tools designed to present this data are emerging all the time, the basic foundation that they build upon is information. And if you’re looking at new ways of visualising data then you probably have a good bedrock of this information at your disposal already.

There’s an opportunity here, a massive one, that could see your company pushing itself to the forefront of the way in which data is presented and getting everyone involved in its use, not just data scientists and your IT team.

What’s needed to make the most of this is the realisation that the apps, websites and social media that your staff are using on a daily basis are indicative of a wider acceptance among them of how data now works in our world, and how it touches every aspect of our lives. People are now comfortable with digesting huge amounts of information, and even expect it to be delivered to them. If they do this in their own time, they’ll have no trouble doing it at work.

 

 

Image provided courtesy of Ian Burt

Are You on the Data Offensive or Defence?


Understanding the different types of data positions – data offensive or data defence.

 

Companies are either on the data offensive or data defence – and organisations need to move to being on the offensive to actively take hold of data and make tangible use of it.

There is a huge amount of data that any company will gather over time. This can be deliberate, and be something that you have set out to obtain, or it can be something that simply gathers as a result of the IT systems that we all use.

There are two ways a business can approach this data: a position of defence or one of offence. One could hold you back, but the other is much more positive, allowing you to push your company into new areas and target your approach so that you achieve exactly what you need to.

 

The defensive approach

Data defence is the traditional approach to managing the information that your company holds. It’s all the regular things that have to be done with large amounts of data, such as maintaining security to make sure none of it leaks or is compromised. It’s the governance of data, the everyday handling of it and the processes around it.

This also includes ensuring privacy and making sure that the quality of the data is up to scratch. These are certainly things that have to be done with data that is gained in a commercial context, and many of them are done to make sure that your business falls in line with whichever set of regulations you have to adhere to.  It’s a case of preventing data from becoming a problem – rather than seeing it as a valuable asset.

This is the approach that many companies take towards data, and the one that can seem to be sensible and correct. That is, until you look further than the data defence attitude and closer at what could otherwise be done. There are opportunities to take the data that you have and use it to push your business on to the next step.

 

Go on the offensive

Being on the data offensive is about taking the wealth of information that you have at your disposal and exploring the possibilities of what it can do for your business in a proactive sense. Whereas data defence is about making sure that everything is in order, data offensive sees you pushing the boundaries and creating new opportunities.

The data that you have at your disposal can open doors for your business that were closed before. This information can support marketing and help to target outbound campaigns, making sure you are reaching the right people in the right way. In turn, this helps to build new revenue, all of which can lead to further data being gathered as time goes on.

Data management can be at the forefront of your company’s strategy rather than being something that simply has to be done. In the modern, digital world the companies that are using data well are those that are harnessing its power and using it to change their behaviour and the way they work. Data is driving their behaviour and they are allowing it to take the lead rather than letting their existing behaviour govern the way data is collected and protected.

 

A light in the dark

There is another kind of data out there that might not seem so full of opportunity until it is put under the microscope and given a closer look.

Dark data, as it is known, is the information that tends to be ignored by businesses and simply builds up in the background over time. This could be server logs, data about former employees, and outdated login information, for example. In Dark Data: A Business Definition, Isaac Sacolick describes it as “data that is kept ‘just in case’ but hasn’t (so far) found a proper usage.” (1)

Much of this data will be seen as having little or no value to your firm, and simply something that is given the minimum amount of attention to make sure it is secure and stored correctly. But harnessing this data can be a big step in the process of moving towards data offensive and taking your company forward.

Any business that finds itself in possession of a significant amount of dark data needs to look at how to harness the opportunities that it can create, and how to capitalise on that information and turn it into something proactive rather than letting it impact your business’ resources.

While dark data can be turned to good use and create opportunities, the failure to do this could pose a risk to your company. Instead of letting it become a burden on your business, why not turn dark data into something positive?

Most companies are currently stuck in the data defence approach, but there are new solutions to this problem that can put you on the offensive. Dark data could be the key to where you go next, helping you to explore new avenues that you hadn’t thought of before. This approach will become even more effective as data analytics tools become standardised and the ability to pull information from the unlikeliest of sources increases through technology such as IoT sensors.

There is a wealth of information that any company builds up over time, and the choices are either to let it become a drain on what you do or harness the power that it can give you and allow it to take you forward.

 

References:

Image provided courtesy of KamiPhuc

 

5 Questions about Information Governance in 5 Minutes: Who Should Own Information Governance?


Interesting video about data governance. This is the second video in our series, “5 Questions about Information Governance in 5 Minutes.” In this video IG experts answer the tricky question, “Who Should Own Information Governance?”

Thanks to http://barclaytblair.com/2013/04/16/5-questions-about-information-governance-in-5-minutes-who-should-own-information-governance/

Using Varonis: Involving Data Owners – Part II


(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance.  You can find the whole series here.)

If your doctor said “Your blood pressure is 120/95” would that mean anything to you?  Even if you could interpret that data as symptomatic of stage 1 high blood pressure, would it be actionable?  A helpful doctor would not only help you understand your vital stats, she’d also empower you to make informed decisions about your health.

Likewise, not only should we deliver targeted reports to data owners, we should ensure that the information is actionable and provokes intelligent, data-driven decisions.

The next step in the Operational Plan is to help owners make informed decisions about who should have access to their data, and to make sure their decisions can be executed without bogging anyone down in paperwork. With DataPrivilege we can do exactly that.

Entitlement Reviews

One of the first actions data owners can take is to re-certify access to their data through an attestation, or entitlement review. At a high level, the owner reviews the list of users who have access to their data (including those who probably shouldn’t), makes any appropriate changes, and then commits those changes to file systems or directory services. What has typically been a very manual and time-intensive (for IT) task can be completely automated with DataPrivilege, the internal web-based interface into the Varonis Metadata Framework.

Once configured, DataPrivilege Entitlement Reviews offer automatic, web-based forms delivered on a regular basis that show data owners exactly who has access to their data, highlighting any users that DatAdvantage recommends for removal based on its automated analysis. These recommendations show owners those users who have likely moved on to other roles, left the company, or were added by mistake. Varonis’ recommendation engine is like a doctor with extremely trustworthy advice on how to immediately improve your health.

These entitlement reviews can be set up for data sets—reviewing the users with access to a specific folder or share—and/or for security groups or mail-enabled distribution lists. This means an organization is able to effectively shift the burden for access reviews for all data to its rightful owner, as well as leverage the same system for application and other group reviews.

Authorization Workflow

While entitlement reviews are key to correcting and maintaining access controls, it’s also important to involve owners at the “point of sale,” when access is initially requested by a user. Traditionally, access control approval has often come from the manager of the requesting user, a group owner who may or may not be aware of what data that group grants access to, or IT, rather than the actual Data Owner. This is a problem, since that’s not usually the person who has the best context to make good access control decisions. To continue our metaphor—it’s like letting the pharmacy decide which medicine we should take.

DataPrivilege changes this model by offering an authorization workflow that puts decisions into the hands of owners and their designated delegates. A big part of operationalizing DataPrivilege is transitioning this approval process from IT to the end users and owners themselves. It can mean significant operational resource gains for IT as well as a higher level of service and data protection.

Self-Service Portal

The last thing I want to mention about DataPrivilege is the Self-Service Portal, which allows Data Owners to get information and make decisions on-demand. The DataPrivilege portal lets owners see—at any time—information about their data, including permissions, log information and statistics.

We’ve found that many of our customers have seen impressive results once they deploy the portal to their users. If you give owners information about their assets and the ability to make decisions, they tend to use it. The Self-Service Portal is another way IT can shift the management burden to owners themselves.

Empowering owners to implement policy is a great first step, but DataPrivilege also offers the ability to automate a lot of this work. The next step in the Varonis Operational Plan involves setting up and deploying automatic rules. Stay tuned!

Using Varonis: Which Data Needs Owners?


(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance.  You can find the whole series here.)

Which Data Needs Owners?

In a single terabyte of data there are typically around 50,000 folders or containers, about 5% of which have unique permissions. If IT were to set a goal of assigning an owner for every unique ACL, they’d need to locate owners for 2,500 folders. That’s quite daunting. And most organizations aren’t dealing with a single terabyte of data; in fact, many enterprise installations we encounter are dealing with multiple petabytes of unstructured data. Clearly we need a more surgical approach to assign owners.
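The arithmetic above scales linearly with the size of the estate. As a rough sketch (using the typical figures quoted in this post, not exact measurements):

```python
# Back-of-the-envelope estimate of how many unique ACLs would need an
# owner, using the typical figures quoted above (illustrative only).
FOLDERS_PER_TB = 50_000
UNIQUE_ACL_RATIO = 0.05  # ~5% of folders carry unique permissions

def owners_needed(terabytes: float) -> int:
    """Unique ACLs (candidate ownership assignments) in a data estate."""
    return round(terabytes * FOLDERS_PER_TB * UNIQUE_ACL_RATIO)

print(owners_needed(1))    # 2500 unique ACLs in a single terabyte
print(owners_needed(200))  # 500000 for a 200 TB estate
```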

Varonis tackled this problem with a longtime customer who needed to identify and assign owners for more than 200 terabytes of CIFS data on their fleet of NetApp filers. There were about 40,000 users in the company, approximately 3,000 of whom (as it turned out) needed to be designated as owners of some data.

When we started taking a close look at specific folders, we discovered that many of them (especially at the top of the hierarchy) simply didn’t need an owner; the only users who could read or write data, according to the ACL, were either service accounts or administrative/IT accounts.

What we needed was a methodology for locating the folders where business users had access and a way to identify the likely owner for just those folders. So that’s what we built.

The logic went like this:

  • Identify the topmost unique ACL in a tree where business users have access.
  • If that ACL’s permissions allow write access to users outside of IT, it’s considered a “demarcation point.”
  • For what’s left, identify higher-level demarcation points where non-IT users can only read data.
  • For each demarcation point, identify the most active users.
  • Correlate active users with other metadata, such as department name, payroll code, managed by, etc.
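The selection logic above can be sketched in code. Everything here — the folder paths, the ACL model, the activity log, and the IT account names — is invented for illustration; the real analysis runs over file-system metadata and audit records, not hand-built dictionaries.

```python
# Hypothetical sketch of the demarcation-point logic described above.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Folder:
    path: str
    acl: dict                     # user -> set of rights, e.g. {"alice": {"read"}}
    children: list = field(default_factory=list)

IT_ACCOUNTS = {"svc_backup", "it_admin"}  # service and administrative accounts

def business_rights(folder):
    """Rights held by non-IT (business) users on this folder's ACL."""
    return {u: r for u, r in folder.acl.items() if u not in IT_ACCOUNTS}

def find_demarcation_points(folder, points=None):
    """Topmost unique ACLs in the tree where business users have access."""
    if points is None:
        points = []
    rights = business_rights(folder)
    if rights:  # some non-IT user can read or write here: a demarcation point
        kind = "write" if any("write" in r for r in rights.values()) else "read-only"
        points.append((folder.path, kind))
        return points  # don't descend further: this subtree gets one owner
    for child in folder.children:  # no business access yet; look deeper
        find_demarcation_points(child, points)
    return points

def likely_owner(path, access_log):
    """Most active non-IT user under a demarcation point."""
    activity = Counter(u for p, u in access_log
                       if p.startswith(path) and u not in IT_ACCOUNTS)
    return activity.most_common(1)[0][0] if activity else None

tree = Folder("/corp", {"it_admin": {"write"}}, [
    Folder("/corp/finance", {"alice": {"read", "write"}, "it_admin": {"write"}}),
    Folder("/corp/archive", {"bob": {"read"}}),
])
log = [("/corp/finance/q3.xlsx", "alice"),
       ("/corp/finance/q3.xlsx", "alice"),
       ("/corp/finance/q3.xlsx", "svc_backup")]

print(find_demarcation_points(tree))
print(likely_owner("/corp/finance", log))
```

In this toy tree, `/corp` itself needs no owner (only IT can touch it), while `/corp/finance` and `/corp/archive` surface as demarcation points, with the most active business user proposed as owner.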

The end result of this process is that each demarcation point has a likely ownership candidate. For this particular customer, the next step was to go through a survey process to confirm ownership of each demarcation point with the likely owners (as determined by Varonis’ reports). Any data without a confirmed owner was locked down to remove non-IT access and underwent a separate disposition process.

Other customers have since added content classification and other risk factors in order to better prioritize the data ownership assignment process. With a good classification scheme in place, IT is able to start assigning owners to the most critical data first.

The key takeaway from this process is that we can use DatAdvantage to quickly identify both the folders that need owners and their likely owners, so IT doesn’t need to make decisions about 2,500 folders per terabyte of data.

While this report was originally a customization for one customer, we’ve now baked it right into DatAdvantage as report 12M – Recommended Base Folders.

Now that we know who our owners are, the next step is to start getting them involved. My next few posts will cover exactly how we do this using both DatAdvantage and DataPrivilege.

Stay tuned!

12 Tips to Prevent Your Sensitive Data from Becoming a WikiLeaks Headline


By David Ricketts, Head of Marketing, C24

 

Recent worldwide controversies surrounding confidential material being supplied by anonymous whistle-blowers to unauthorized people and to sites such as WikiLeaks should act as a catalyst for organisations across the globe to take control of data governance and guarantee that employees have access only to the information they need.

 

In our experience, those responsible for the IT function are finding it increasingly difficult, and in some cases impossible, to manage many elements of data governance within their organisation. Below are some tips explaining the steps that organisations in charge of permission management need to take to safeguard their data. By taking these steps, the IT function will be able to understand who can access the data, who is accessing it, who shouldn’t have access, and who owns it, and remediate risk faster than with traditional data governance and classification methods.

 

At present, IT professionals – rather than the people that create the data (be it a spreadsheet, PowerPoint presentation or company report) – are the ones making many of the decisions about permissions, acceptable use, and acceptable access review. However, as IT personnel aren’t equipped with adequate business context around the growing volumes of data, they’re only able to make a best effort guess as to how to manage and protect each data set.

 

Until organisations start to shift the decision making responsibility to business data owners, it is IT that has to enforce rules for who can access what on shared file systems, and keep those structures current through data growth and user role changes. IT needs to determine who can access data, who is accessing it, who should have access, and what is likely to be sensitive.

 

Here are the top must-do actions for the IT team’s ‘to do’ list, to carry out as part of a daily data management routine for senior executives, creating a benchmark for data governance:

 

1          Identify Data Owners

The IT department should keep a current list of data business owners (e.g. those who have created original data) and the folders and sites under their responsibility. By having this list “at the ready,” they can expedite a number of the data governance tasks, including access authorisation, revocation and review, and identifying data for archival. The net effect of this simple process is a marked increase in the accuracy of data access entitlement and, therefore, data protection.

 

2          Remove global groups and perform data entitlement reviews

It is not uncommon for folders on file shares to have access control permissions allowing “everyone,” or all “domain users” (nearly everyone), to access the data contained. This creates a significant security risk, as any data placed in that folder will inherit those “exposed” permissions, and those who place data in these wide-open folders may not be aware of the lax access settings. Global access to folders should be removed and replaced with rules that give access to the explicit groups that need it.

 

3          Audit Permissions Changes

Access Control Lists are the fundamental preventive control mechanism in place to protect data from loss, tampering, and exposure. IT requires the ability to capture and report on access control changes to data – especially for highly sensitive folders. If access is incorrectly assigned or changed to a more permissive state without good business reason, IT and the data business owner must be quickly alerted, and able to remediate the situation.

 

4          Audit Group Membership Changes

Directory Groups are the primary entities on Access Control Lists (Active Directory, LDAP, NIS, etc.); membership grants access to unstructured data (as well as many applications, network gateways, etc.). Users are added to existing and newly created groups on a daily basis, so these membership changes need to be audited: without a record of them, IT cannot determine when, why, or by whom a user was granted access.

 

5          Audit Data Access

Effective management of any data set is impossible without a record of access. Unless you can reliably observe data use you cannot observe its misuse, abuse, or non-use. Even if an IT department could ask its organisation’s users if they used each data set, the end users would be unlikely to be able to answer accurately—the scope of a typical user’s access activity is far beyond what humans can recall.

 

6          Prioritise Data

While all data should be protected, some data needs to be protected much more urgently than others. Using data owners, data access patterns, and data classification technology, data that is considered sensitive, confidential, or internal should be tagged accordingly, protected and reviewed frequently.

 

7          Align Security Groups to Data

Whenever someone is placed in a group, they get file system access to all folders that list the group on their ACLs. Unfortunately, many organisations have completely lost track of which Active Directory, SharePoint or NIS groups appear on which folders’ ACLs. It is impossible to align a role with the right data if the organisation cannot verify what data a group provides access to.

 

8          Lock Down, Delete, or Archive Stale, Unused Data

Not all of the data contained on shared file servers and network attached storage devices is in active use. By archiving stale or unused data to offline storage or deleting it, IT makes the job of managing the remainder simpler and easier, while freeing up expensive resources. At the very least, access to inactive data should be tightly restricted to reduce the risk of loss, tampering, or theft.

 

By automating these management tasks and conducting them frequently, organisations will gain the visibility and auditing required to determine who can access the data, who is accessing it, and who should have access.

 

9          Review data entitlement (ACL)

Every file and folder in a file system has access controls assigned to it which determine which users can access the data and how (i.e. read, write, execute, list). These controls need to be reviewed on a regular basis, and the settings documented, so that they can be verified as accurate by data business owners and security policy auditors.
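As an illustration only (the folder paths, users and rights below are invented), an entitlement review can start from something as simple as a per-folder report of each ACL for the business owner to confirm, flagging obvious problems such as global groups:

```python
# Minimal sketch of an entitlement-review report over invented ACL data.
acls = {
    "/shares/finance": {"alice": "read/write", "bob": "read", "everyone": "read"},
    "/shares/hr":      {"carol": "read/write"},
}

def review_report(acls):
    """Render each folder's ACL for owner sign-off, flagging global groups."""
    lines = []
    for folder, entries in sorted(acls.items()):
        lines.append(f"{folder}:")
        for user, rights in sorted(entries.items()):
            flag = "  <-- REVIEW: global group" if user == "everyone" else ""
            lines.append(f"  {user:10s} {rights}{flag}")
    return "\n".join(lines)

print(review_report(acls))
```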

 

10        Revoke unused and unwarranted permissions

Users with access to data that is not material to their jobs constitute a security risk for organisations. Most users only need access to a small fraction of the data that resides on file servers. It is important to review and then remove or revoke permissions that are unused. IT should have the ability to capture and report on access control changes to data – especially for highly sensitive folders. If access is incorrectly assigned or changed to a more permissive state without good business reason, the data business owner will be able to quickly identify and mitigate the situation by reporting the inconsistency to IT.

 

 

11          Delete unused user accounts

Directories may at times contain user accounts for individuals who are no longer with the company or group. These accounts constitute a security hole: those with a working knowledge of, and access to, user directories may retrieve information under someone else’s name. Organisations should routinely identify inactive users and verify that the need for each account still exists.
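A minimal sketch of this routine check, assuming last-logon dates can be pulled from the directory (the accounts, dates, and 90-day threshold below are invented for illustration):

```python
# Flag accounts whose last logon is older than a chosen threshold.
from datetime import date, timedelta

last_logon = {  # example data; in practice, pulled from the directory
    "alice": date(2016, 8, 1),
    "bob":   date(2015, 11, 3),   # left the company months ago
    "carol": date(2016, 7, 20),
}

def inactive_accounts(last_logon, today, threshold=timedelta(days=90)):
    """Accounts not seen within the threshold, as candidates for deletion."""
    return sorted(u for u, seen in last_logon.items() if today - seen > threshold)

print(inactive_accounts(last_logon, date(2016, 8, 15)))  # ['bob']
```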

 

12          Preserve all user access events in a searchable archive

Even for environments where the user-to-data permissions are current and accurate, it is important to maintain a searchable archive of all user access events. This will help organisations with triage and forensic analysis should data misuse or loss occur. IT should be able to search on a username, a filename, a date of interest, or any combination thereof to ascertain who accessed what, and how. This information can also help expedite helpdesk call resolution.
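A sketch of what such a search might look like, assuming access events are stored as simple (user, file, action, date) records; all names and paths below are invented for illustration:

```python
# Search an access-event archive by any combination of user, filename, date.
from datetime import date

events = [
    ("alice", "/shares/finance/q3.xlsx", "read",  date(2016, 8, 1)),
    ("bob",   "/shares/finance/q3.xlsx", "write", date(2016, 8, 2)),
    ("alice", "/shares/hr/payroll.xlsx", "read",  date(2016, 8, 2)),
]

def search(events, user=None, filename=None, day=None):
    """Filter events; any criterion left as None matches everything."""
    return [e for e in events
            if (user is None or e[0] == user)
            and (filename is None or filename in e[1])
            and (day is None or e[3] == day)]

print(search(events, user="alice"))          # both of alice's events
print(search(events, filename="payroll"))    # the HR file access
print(search(events, day=date(2016, 8, 2)))  # everything on that day
```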

 

 

What Are You Waiting For?

The biggest hurdle to overcome with this ‘to do’ list is the amount of time these checks require on a daily basis, if they are even possible. It is imperative that businesses support their internal IT function with tools such as Varonis, enabling them to adopt best-practice techniques and manage the business-critical areas highlighted in this report.

 

If you would like further information about any of the areas highlighted in this report please do not hesitate to call C24 or visit http://www.c24.co.uk

Should a company’s executives drive data governance and regulation, or its IT department?


Data governance is one of those amorphous terms that businesses struggle to define, much less implement. In broad strokes, it involves the implementation of processes and methods that govern how data analysts and others within an organization can handle and process data.

That sort of control—even in the name of regulations and quality—is liable to spark political infighting within even the most sedate organization. Does the need to quickly analyze data outweigh the risks of regulatory fines? Will the implementation of data security interfere with the efficiency of analysis? But with more and more regulations in place, business executives and IT departments have little choice but to wrestle with the issue.

“The stakes are high when it comes to data-intensive projects, and having the right alignment between IT and the business is crucial,” Michele Goetz, an analyst for Forrester, wrote in an Oct. 4 corporate blog posting. “Data governance has been the gold standard to establish the right roles, responsibilities, processes, and procedures to deliver trusted secure data.”

Policies and procedures can weed out bad data and faulty implementations, she added, making governance more crucial than ever. However, most governance is focused on risk avoidance and led by a company’s IT department, with the business side of things contributing relatively little to the discussion. That massive amount of management and process, in turn, “takes time and stifles experimentation and growth.”

Yet companies need data analysis to happen quickly enough for the data to be actually useful to strategy; recall how Nucleus Research, in a study released over the summer, suggested that the average half-life of data for tactical decision-makers is 30 minutes or less, while strategically oriented data tends to go stale after only a few days. As a result, days’ worth of checks and balances can rapidly degrade the usefulness of data.

“Data governance needs to evolve to develop policies that are not just about what you can’t do, but what you can do,” Goetz wrote. “If you really want your data governance program to mature and truly be business led, the greatest pivot will be for IT to give up control of the data and the facilitation of data governance.”

In other words, give business control: “Have the business take over and define the amount of governance and control it wants over its use. Have the business create a framework that aligns trust in data with use.”

Whether or not one agrees with Goetz that business needs more control over data governance, the fact remains that the increasing amount of data handled by organizations—and the increasing pressure to analyze it for insight—can lead to slowdowns and paralysis without a plan and structure. Some organizations are wrestling with this brave new world by hiring chief data officers to handle everything from data stewardship to communicating data schemas. Others are embracing self-service B.I. solutions that help automate and wrangle data without the need for quite so much active effort on employees’ part.

http://slashdot.org/topic/bi/does-business-or-it-drive-data-governance/

 

Data Migration a Security Threat: Varonis



Organizations are potentially exposing themselves to data breaches during migrations, and many don’t have confidence their data is secure, according to a Varonis survey. While 95 percent of organizations move data at least once per year, 65 percent of companies said they are not confident sensitive data was protected during a migration, according to an August survey of C-level IT executives conducted by data governance software specialist Varonis Systems. The survey found 96 percent of respondents reported concerns when performing data migrations, with many leaving their data overexposed and vulnerable. The results suggest a growing data security problem that could affect the vast number of businesses performing data migrations and consolidations.

Organizations most commonly move data from one file server to another or to network attached storage (NAS) (80 percent), between domains (44 percent) and from file shares to SharePoint (40 percent). Two-thirds of organizations report that they usually move more than 1TB of data at a time, for a variety of reasons, including infrastructure upgrades and organizational changes–for example, a merger or acquisition. On the security side, 35 percent of those surveyed reported that they were very confident sensitive data would only be accessible to the right people during a migration.

“The survey underscores that maintaining who has access to what is an ongoing problem for organizations. The scale of the problem that organizations face when moving terabytes of data may be surprising, as a typical terabyte contains about 50,000 folders, and of those folders about 5 percent, or 2,500 folders, have unique permissions,” David Gibson, Varonis vice president of strategy, said in a prepared statement. “An average access control list (ACL) contains three to five security groups, and a typical group contains anywhere from five to 50 users, as well as other groups that contain even more users and groups. Let’s say each access control list represents 5 minutes of work to re-create—that’s over 200 hours of work per terabyte of data moved.”
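Gibson’s estimate can be checked with a quick back-of-the-envelope calculation, using only the figures from the quote above:

```python
# Reproducing the workload estimate quoted above: ~2,500 unique ACLs per
# terabyte, at ~5 minutes of manual re-creation work each.
unique_acls_per_tb = 50_000 * 0.05   # 2,500 folders with unique permissions
minutes_per_acl = 5
hours_per_tb = unique_acls_per_tb * minutes_per_acl / 60

print(round(hours_per_tb))  # ~208 hours of manual work per terabyte moved
```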

About one-third of respondents described themselves as very confident that sensitive data would be accessible only to the correct people during a move, but only 20 percent reported that maintaining permissions is not an issue. Seventeen percent of respondents reported it as a significant issue, 49 percent reported it as a slight-to-moderate issue, and a worrying 14 percent said they were aware of the issue but had not addressed it.

“Data and domain migrations are a big part of IT’s day-to-day activities. Organizations already face challenges maintaining availability, data integrity and confidentiality during a migration, not to mention identifying the data that should be moved and who it belongs to,” the report concluded. “With no slowdown in data growth in sight, IT organizations should anticipate that more migrations and archival projects will need to fit into their already busy schedules.”

Data security fears are also affecting adoption of cloud services, an earlier Varonis report found. That survey revealed that 80 percent of companies do not allow their employees to use cloud-based file-synchronization services, citing fears of data leakage, security breaches, and compliance issues; only 20 percent of respondents said they currently allow such services. Yet 70 percent of companies said they would use these services if they were as robust as internal tools.

Thanks to http://www.eweek.com

 

Proportionality in Ediscovery: Getting Beyond the Academic and Practitioner Perspective


Interesting points from an e-legal blog
Point 1: The expanding digital universe will exceed 35 zettabytes by 2020, IDC predicts.
In 2009, global digital data topped 800,000 petabytes and was projected to reach 1.2 million petabytes in 2010. Storing 1 million petabytes on DVD would generate a stack of discs that reaches the moon and back. However, that rate of growth (62% in one year) pales compared with IDC's prediction that the figure will top 35 zettabytes (36.7 million petabytes) by 2020, or 44 times as much as 2009. That stack of DVDs would reach halfway to Mars.
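The unit conversions above can be verified with a short calculation, assuming IDC's figures use binary units (1 ZB = 2^20 PB) and taking the 2009 total of "800,000 petabytes" as roughly 0.8 ZB:

```python
# Sanity-check of the conversions quoted above (binary units assumed).
ZB_IN_PB = 2 ** 20               # 1,048,576 petabytes per zettabyte

pb_2020 = 35 * ZB_IN_PB          # IDC's 2020 prediction, in petabytes
pb_2009 = 0.8 * ZB_IN_PB         # 2009 total, taking "800,000 PB" as ~0.8 ZB

print(f"{pb_2020 / 1e6:.1f} million PB")   # matches "36.7 million petabytes"
print(f"{pb_2020 / pb_2009:.0f}x growth")  # matches "44 times as much as 2009"
```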

(graphic originally posted by Tech News Ninja)

Point 2: Usage of Social Media is increasing: (from comScore‘s US Digital Year in Review 2010)
Point 3: Social media represents significant ediscovery challenges:
The Stored Communications Act (SCA) is a formidable obstacle for parties looking to collect data from a social network. Often the only option is to seek a voluntary waiver from the person of interest. Needless to say, more often than not, any request to collect and analyze this type of data will need to be targeted and precise so as to avoid privacy concerns and other rights. If the information is available on a public-facing portal of a social network, then collection may be easier to accomplish, though the ability to do a targeted collection is somewhat limited by the user interface and/or local API. Further, it is difficult to think of this dynamic, changing data as a “document” under traditional ediscovery practices, so reviewing and analyzing it presents unique challenges.
Point 4: Data Governance is becoming a stronger practice and discipline – it is also on the rise: (graphic created by DAMA.org)
 
Conclusion: Data (how we use it, how we access it, where we create it) is changing. All of this leads to more and more data from more and more sources. The MDM/Data Governance movement seeks to organize data inside organizations and to make information (which is what data contains and transports) more accessible. So while the universe of data grows, so does the ability to seek out and capture only the relevant or useful information (see the graph below for a non-scientific illustration). Proportionality could therefore eventually be “built into” our ediscovery methods and practices; it simply will not be feasible any other way.

 

Great video on Varonis DatAdvantage for Microsoft Exchange


The Challenge

Microsoft Exchange installations containing huge amounts of semi-structured data can present immense protection and management challenges:

  • Permissions: Determining who has access to Exchange mailboxes and public folders, including shared and delegated mailbox permissions.
  • Access Auditing: IT can’t answer pressing questions like, “Who accessed my email or calendar?” or “Who sent email on my behalf?”
  • Data Ownership: IT can’t reliably identify business owners of public folder data, and even some mailboxes.
  • Operational: Manual permissions and group changes are untested and unreliable.
  • High Risk: Stale, excess permissions are rarely revoked. Data open to the Anonymous group can be difficult to identify and remediate. Critical data is exposed.

The Varonis Solution

Varonis® DatAdvantage® addresses these challenges by aggregating Active Directory user and group details, ACL information and all data access events—without requiring native OS auditing—to build a complete picture of who can and who is accessing data, and who should have their access revoked. It also leads IT to rightful data owners, so the right people can ensure appropriate access and usage.
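The core idea described above, comparing who *can* access data against who actually *does*, and flagging the difference as candidates for revocation, can be sketched in a few lines. This is a minimal illustration of the general technique, not Varonis code; the data structures and names are hypothetical:

```python
def stale_permissions(granted, access_log):
    """Return, per resource, users who hold access rights but show no activity.

    granted: dict mapping resource name -> set of users with access
    access_log: list of {"user": ..., "resource": ...} access events
    """
    active_users = {event["user"] for event in access_log}
    return {
        resource: sorted(users - active_users)   # sorted for stable output
        for resource, users in granted.items()
        if users - active_users                  # keep only resources with stale grants
    }

# Hypothetical example: three users can access the folder, one actually did.
granted = {"Finance/PublicFolder": {"alice", "bob", "carol"}}
access_log = [{"user": "alice", "resource": "Finance/PublicFolder"}]

print(stale_permissions(granted, access_log))
# {'Finance/PublicFolder': ['bob', 'carol']}
```

A real implementation would scope activity per resource and over a time window, but even this sketch shows why collecting both ACLs and access events (rather than ACLs alone) is what makes revocation decisions possible.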

“With Varonis® DatAdvantage® for Exchange, we have significantly reduced our Exchange access and data management workload for tasks that we do many times every day. We now have a single console with a complete map to our ever-growing Exchange environment that has enabled our staff to identify and proactively manage and protect Exchange data.” – Bernard Besohe, Publications Office of the European Union