
Supporting Mainframe Data Compliance: A UX Approach

Designing products that help companies comply with multiple data security regulations (such as GDPR, HIPAA, or PCI-DSS) requires considering not only the needs of the individuals whose information must be kept secure, but also the needs of the security professionals within these companies. Supporting regulatory compliance for mainframe data requires confidently identifying sensitive data and pinpointing its location within a mainframe environment. These environments can be unfathomably vast, and the data security professionals in charge of compliance activities often lack the permissions needed to confirm data identities directly at their source. The need some organizations face to comply with multiple sets of data regulations compounds these issues of environment scope and data complexity even further.

These issues create a quandary for those of us tasked with designing products to maintain regulatory compliance. How can we leverage UX principles to help data security professionals protect sensitive data on the mainframe without violating the data regulations they must uphold? Also, can we do it in a way that improves awareness of an organization’s data landscape over time?

Abstract image depicting moving data (Credit: Pexels)

Figure 1. Data is everywhere.

At the end of the day, the problems presented in the above scenario are not new to UX professionals. Users’ ultimate goal is to decide on data identities in order to determine whether further compliance activities should occur. Although user access to the data’s source may be restricted, clues from the data environment can surface to support the identification process. Such support requires presenting metadata as “clues” at appropriate times. Finally, ensuring the visibility and understandability of those clues requires managing the large volume of content within the data environment so that users can see the “signal” through all of the “noise.”

Identifying the features necessary to achieve the above-mentioned tasks requires building knowledge about data security professionals, their work environments, and the data regulations with which their organizations must comply. This article presents a few strategies that leverage such knowledge to design features assisting with compliance processes for mainframe data.

Managing Large Volumes of Data

Mainframe data environments can be so large that users require assistance managing the sheer volume of data. The best methods for managing enormous volumes of data empower users to take control of the data environment instead of relegating them to passive roles. Allowing users to designate known sensitive data sources for regulation compliance takes a step toward such empowerment. For example, if a particular database is known to contain personally identifying information that matters for GDPR compliance, users could have a location within the product to designate that database as a known sensitive data source. Users could then handle designated data sources in ways that make sense for their organization, anything from exempting them from scheduled compliance scanning to checking their contents on unique schedules.
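As a minimal sketch of what such a designation feature might store (all names here are hypothetical illustrations, not any real product's API), each known source could carry the regulation it falls under and the scan policy the organization has chosen for it:

```python
# Hypothetical sketch: a registry where users designate known sensitive
# data sources and choose how scheduled compliance scans treat each one.
from dataclasses import dataclass

@dataclass
class SensitiveSource:
    name: str
    regulation: str    # e.g. "GDPR", "HIPAA", "PCI-DSS"
    scan_policy: str   # "exempt", "default", or "custom-schedule"

registry: dict[str, SensitiveSource] = {}

def designate(name: str, regulation: str, scan_policy: str = "default") -> None:
    """Record a user's designation of a known sensitive data source."""
    registry[name] = SensitiveSource(name, regulation, scan_policy)

# A user marks a customer database as GDPR-relevant and exempts it
# from routine scans because its contents are already known.
designate("CUSTDB", regulation="GDPR", scan_policy="exempt")
```

The point of the sketch is the user-controlled policy field: the product records a decision the user made rather than imposing one.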

AI and machine learning are viable approaches to managing large volumes of data, especially in the data identification process. Machine learning in particular can identify sensitive data with varying levels of certainty that increase over time with user input on the quality of findings. The upfront time cost of introducing machine learning could prove steep, especially as the size of the data environment grows, and it will likely take users away from other work tasks. This time cost will drop as the algorithm learns the features of the data environment, allowing users to focus on other mission-critical tasks. In either case, the question becomes how to impress upon users that the algorithm needs their feedback in order to function optimally.
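The feedback loop described above can be sketched very simply. In this hypothetical illustration (not a real algorithm from any product), a pattern's confidence starts at a neutral prior and converges toward the observed agreement rate as users confirm or reject its matches:

```python
# Hypothetical sketch: a finding's confidence adjusted by accumulated
# user feedback (agree/disagree) on the algorithm's matches.
class PatternConfidence:
    def __init__(self, prior: float = 0.5):
        self.agrees = 0
        self.disagrees = 0
        self.prior = prior

    def record_feedback(self, agreed: bool) -> None:
        if agreed:
            self.agrees += 1
        else:
            self.disagrees += 1

    def confidence(self) -> float:
        # Smoothed rate: starts at the prior with no feedback, then
        # converges toward the observed agreement rate over time.
        total = self.agrees + self.disagrees
        return (self.agrees + self.prior) / (total + 1)

ssn_pattern = PatternConfidence()
for verdict in [True, True, False, True]:   # three agrees, one disagree
    ssn_pattern.record_feedback(verdict)
print(round(ssn_pattern.confidence(), 2))   # 0.7
```

Even a toy model like this makes the design problem concrete: without feedback, confidence stays stuck at the prior, which is exactly why the product must motivate users to respond.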

Capturing Users’ Attention

Maintaining data compliance requires vigilance; however, tethering users to one UI for large portions of their workday while an algorithm learns is not always realistic. And once the algorithm has adequately learned the data environment and users have shifted their focus elsewhere, they may not always be able to return to the UI when it needs their feedback. In both circumstances, there is a balance to strike between remaining compliant and ensuring the safety and health of key systems. Successfully balancing this attentional split begins with gaining users’ attention and then accurately conveying the algorithm’s needs. How can the product clearly let users know it needs their input, and how can it convince them to provide that input sooner rather than later?

A very common way to accomplish both of these things involves notifications. Notifications require careful design in terms of frequency, anatomy, and language to be maximally effective. If notifications arise too frequently, perceived oversensitivity can diminish perceptions of the tool’s accuracy. Similarly, notifications arising too infrequently can create perceptions of insensitivity, again leading to assumptions of tool inaccuracy. The right frequency for notifications depends on the user, the organization’s policies around data compliance, and the work environment, so users should have the ability to tune notifications in ways that enable them to contribute effectively to data compliance activities.
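One way to make frequency user-tunable is a simple throttle: low-urgency alerts queue up and surface as a batch once a user-chosen interval has elapsed. The sketch below is a hypothetical illustration of that idea, not any product's notification system:

```python
# Hypothetical sketch: a user-tunable throttle that batches alerts
# and delivers them only after a minimum interval has elapsed.
class NotificationThrottle:
    def __init__(self, min_interval_s: float):
        self.min_interval_s = min_interval_s  # user-tuned setting
        self.last_sent = float("-inf")
        self.pending: list[str] = []

    def submit(self, message: str, now: float) -> list[str]:
        """Queue a message; return the batch to deliver, if any."""
        self.pending.append(message)
        if now - self.min_interval_s >= self.last_sent:
            batch, self.pending = self.pending, []
            self.last_sent = now
            return batch   # deliver everything queued so far
        return []          # hold until the interval elapses

throttle = NotificationThrottle(min_interval_s=600)  # 10 minutes
throttle.submit("3 new medium-confidence matches", now=0)    # delivered
throttle.submit("1 new low-confidence match", now=60)        # held
throttle.submit("scan complete", now=700)                    # batch of two
```

Raising `min_interval_s` trades immediacy for fewer interruptions, which is precisely the oversensitivity/insensitivity balance described above, placed in the user's hands.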

The construction of a notification, its anatomy and language, conveys importance to users. Within most work environments, users appreciate when they can divine the importance of notifications at a glance. Therefore, designing notifications with cues for color and size presented in a logical order could convey importance with minimal cognitive processing. The design of the notification should serve as a prompt for users to read its details, which should convey product needs and direct users to take specific actions. Selecting a tone that encourages users to access the product for a clearly conveyed purpose can help convince users to shift their attention back to the product from other tasks in the work environment.

Supporting User Verification of Sensitive Data Findings

Magnifying glass to symbolize investigation of content (Credit: Pexels)

Figure 2. Analyze content.

When identifying sensitive data as part of compliance activities, particularly when the burden of identification falls upon machine learning algorithms, users need the ability to verify the accuracy of an algorithm’s findings. However, given that users of data compliance products lack access rights to sensitive data, UX professionals cannot simply design displays that show the contents of data sources, nor should the solution be to force users to hunt down a data source’s owner for verification. Instead, a realistic solution arises from empowering users to manage when they should review matches via settings for confidence levels (for example, high, medium, or low match). Beyond designing confidence level settings, UX professionals should collaborate with content writers to tell a story about the data needing verification by surfacing metadata to users as part of an explanation of how the machine learning algorithm arrived at its decision. This explanation would not only help users judge the accuracy of the algorithm’s identification, but also provide insight into how the identification process works.
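The confidence level settings described above amount to mapping an algorithm's raw match score into the high/medium/low buckets that users tune, and letting users choose which buckets warrant manual review. A hypothetical sketch (threshold values and names are illustrative assumptions):

```python
# Hypothetical sketch: bucket a match score into user-tunable
# confidence levels and decide whether it needs manual review.
DEFAULT_THRESHOLDS = {"high": 0.9, "medium": 0.6}  # user-adjustable

def confidence_level(score: float, thresholds=DEFAULT_THRESHOLDS) -> str:
    if score >= thresholds["high"]:
        return "high"
    if score >= thresholds["medium"]:
        return "medium"
    return "low"

def needs_review(score: float,
                 review_levels=frozenset({"medium", "low"})) -> bool:
    # Users choose which levels warrant manual verification; here,
    # high-confidence matches are accepted without review.
    return confidence_level(score) in review_levels

print(confidence_level(0.95), needs_review(0.95))  # high False
print(confidence_level(0.70), needs_review(0.70))  # medium True
```

Exposing the thresholds and the review set as settings, rather than hard-coding them, is what turns the algorithm's output into something users can manage.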

Guiding User Actions

As mentioned earlier, machine learning algorithms that help users identify data need user input to learn the data environment. Based upon confidence levels and metadata, the product can present users with a response set, such as “agree” or “disagree,” to elicit the feedback needed to train the algorithm to identify data. However, for users to truly understand the algorithm’s progress in learning the data environment, the request for feedback should present a recommended default response that reflects the algorithm’s judgment. So, if the confidence level is medium and the metadata supports the identification, then the default among the response options could be “agree.” This would suffice for the identification of sensitive data and serve as a starting point for completing a data compliance workflow. It helps users engage with the algorithm quickly and efficiently, regardless of whether they agree with the provided identification.
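The default-response rule in the paragraph above can be written down directly. This is a hypothetical sketch of that decision, not a prescribed implementation:

```python
# Hypothetical sketch: choose the recommended default response from
# the confidence level and whether the surfaced metadata supports
# the identification, per the rule described in the text.
def default_response(confidence: str, metadata_supports: bool) -> str:
    if confidence == "high":
        return "agree"
    if confidence == "medium" and metadata_supports:
        return "agree"
    return "disagree"  # low confidence, or metadata contradicts

print(default_response("medium", metadata_supports=True))   # agree
print(default_response("medium", metadata_supports=False))  # disagree
print(default_response("low", metadata_supports=True))      # disagree
```

Because the default mirrors the algorithm's own judgment, a user who overrides it is giving the algorithm exactly the corrective signal it needs.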

Going Further: Selecting Remediation Activities

Remaining compliant with the various data regulation sets requires remediating any violations in how organizations treat sensitive data. So, once sensitive data is identified, users need to act on it according to the applicable regulations. However, this requires specific knowledge of regulations, and given the sizes of some regulation sets and the requirement to comply with multiple regulation sets, detailed knowledge can take time to build. Therefore, a key feature at this point should be presenting users with recommended remediation activities based upon the applicable regulations. The user could choose to follow the recommendation or proceed differently, knowing they have received a full brief of the information necessary to inform their choice.
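At its simplest, such a recommendation feature is a lookup from (regulation, data type) to candidate remediation activities. The mappings below are illustrative placeholders only, not legal or compliance guidance:

```python
# Hypothetical sketch: recommend remediation activities by looking up
# the regulations that apply to an identified data type.
# The activity lists here are illustrative, not actual regulatory text.
REMEDIATIONS: dict[tuple[str, str], list[str]] = {
    ("GDPR", "personal_data"): ["encrypt at rest", "record processing basis"],
    ("HIPAA", "health_record"): ["restrict access", "log disclosures"],
    ("PCI-DSS", "card_number"): ["mask digits", "tokenize"],
}

def recommend(data_type: str, regulations: list[str]) -> list[str]:
    """Collect recommended activities across all applicable regulations."""
    steps: list[str] = []
    for reg in regulations:
        steps.extend(REMEDIATIONS.get((reg, data_type), []))
    return steps

# A card number found in an environment subject to both PCI-DSS and GDPR:
print(recommend("card_number", ["PCI-DSS", "GDPR"]))
```

Iterating over every applicable regulation, rather than stopping at the first match, reflects the multi-regulation burden the article describes: one finding can trigger obligations under several regulation sets at once.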

Final Thoughts

Lit lightbulb signifying dawning of other ideas (Credit: Pexels)

Figure 3. New ideas.

Data compliance begins with sensitive data identification, a task that can be very tricky to accomplish given the restrictions users may face. Oftentimes, users either lack direct access to sensitive data sources or a compliance product cannot be permitted to surface sensitive data to users, making it difficult to confirm data identifications within data sources. Despite data access restrictions, there is still an abundance of information to surface to users, and the strategies outlined above can help you identify what is most important to surface as well as ensure that users see and respond to it. When information about users’ data environments and applicable regulations is shared with them at the right time, they build knowledge over time, become an indispensable resource to their organizations, and can stay ahead of the compliance game.

Leslie McFarlin is the Senior UX Researcher for Enterprise Data Protection products and Lead Researcher for the AIUX group at CA Technologies. She has 14 years of experience leading quantitative and qualitative research for software, web, print, and physical products. Twitter: @LivLuvsSkynet