FlowTrack takes the confidentiality and integrity of its customer data very seriously. As stewards of our Customers' data, we strive to ensure that data is protected from unauthorized access and that it is available when needed. The following policies drive many of our procedures and technical controls in support of the FlowTrack mission of data protection.
Production systems that create, receive, store, or transmit Customer data (hereafter "Production Systems") must follow the requirements and guidelines described in this section.
FlowTrack policy requires that:
(a) Data must be handled and protected according to its classification requirements and following approved encryption standards, if applicable.
(b) Whenever possible, store data of the same classification in a given data repository and avoid mixing sensitive and non-sensitive data in the same repository. Security controls, including authentication, authorization, data encryption, and auditing, should be applied according to the highest classification of data in a given repository.
(c) Workforce members shall not have direct administrative access to production data outside of approved, time-limited production access grants; such grants may exclude access during normal business operations. Exceptions include emergency operations such as forensic analysis and manual disaster recovery.
(d) All Production Systems must disable services that are not required to achieve the business purpose or function of the system.
(e) All access to Production Systems must be logged, following the FlowTrack Auditing Policy.
(f) All Production Systems must have security monitoring enabled, including activity and file integrity monitoring, vulnerability scanning, and/or malware detection, as applicable.
Data is classified and handled according to the FlowTrack Data Handling Specifications and Data Classification document.
Critical, confidential, and internal data will be tagged upon creation, if tagging is supported. Each tag maps to a data type defined in the data classification scheme, which in turn maps to a protection level for encryption, access control, backup, and retention. Data classification may alternatively be identified by location/repository. For example, source code in FlowTrack’s AWS CodeCommit repos is considered “Internal” by default, even though a tag is not directly applied to each source file. Similarly, all assets in the AWS Production account are considered "Confidential" and all assets in the AWS Test account are considered "Internal", unless tagged otherwise.
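The tag-or-location resolution described above can be sketched as a simple lookup. This is an illustrative model only: the tag names, repository defaults, and protection attributes below are hypothetical, not FlowTrack's actual classification scheme.

```python
# Hypothetical sketch of tag/location-based data classification.
# All names and defaults here are illustrative assumptions.

# Default classification by repository/location when no tag is applied.
LOCATION_DEFAULTS = {
    "codecommit": "internal",        # source code repos
    "aws-production": "confidential",
    "aws-test": "internal",
}

# Protection requirements keyed by classification level.
PROTECTION_LEVELS = {
    "critical":     {"encrypt": True,  "audit": True},
    "confidential": {"encrypt": True,  "audit": True},
    "internal":     {"encrypt": False, "audit": True},
}

def classify(location, tag=None):
    """An explicit tag wins; otherwise fall back to the location default.
    Unknown locations fail closed to the highest common level."""
    if tag is not None:
        return tag
    return LOCATION_DEFAULTS.get(location, "confidential")

print(classify("codecommit"))                    # internal
print(classify("aws-test", tag="confidential"))  # confidential
```

Failing closed (defaulting unknown locations to "Confidential") mirrors the policy of applying controls according to the highest classification of data in a repository.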
Critical and confidential data is always stored and transmitted securely, using approved encryption standards. More details are specified in FlowTrack’s Data Classification and Handling document.
All IT systems that process and store sensitive data follow the provisioning, configuration, change management, patching, and anti-malware standards defined in the Configuration and Change Management document.
FlowTrack hosts on Amazon Web Services in the US East (N. Virginia) region by default. Data is replicated or stored across multiple Availability Zones (data centers) for redundancy and disaster recovery.
All FlowTrack employees, systems, and resources adhere to the following standards and processes to reduce the risk of compromise of Production Data:
FlowTrack employee access to production is guarded by an approval process and is disabled by default. When access is approved, the employee must request temporary credentials, which are granted in the form of the employee's SSH public key being signed by FlowTrack's Certificate Authority for a short validity period; the signed key temporarily allows access to production. Production access requests and activity logs are reviewed by the security team.
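The effect of a short-lived signed credential can be sketched as a validity-window check. The eight-hour maximum and the field names are assumed for illustration; the actual grant duration comes from the access-approval process, and in practice the check is performed by the SSH server against the certificate itself.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy maximum for a temporary production-access grant.
MAX_VALIDITY = timedelta(hours=8)

def access_allowed(valid_from, valid_to, now=None):
    """Allow production access only within the signed key's validity window,
    and only if the window itself does not exceed the policy maximum."""
    now = now or datetime.now(timezone.utc)
    if valid_to - valid_from > MAX_VALIDITY:
        return False  # certificate signed for longer than policy permits
    return valid_from <= now <= valid_to

issued = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
expires = issued + timedelta(hours=4)
print(access_allowed(issued, expires, now=issued + timedelta(hours=2)))  # True
print(access_allowed(issued, expires, now=issued + timedelta(hours=6)))  # False
```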
Customer data is logically separated at the database/datastore level using a unique identifier for the institution. The separation is enforced at the API layer: the client must authenticate against a chosen institution, and the institution's unique identifier is then included in the access token and used by the API to restrict access to that institution's data. All database/datastore queries accessing ePHI or CHD include the institution identifier.
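A minimal sketch of this enforcement point, assuming a token that carries the institution identifier and a simple in-memory datastore (the token layout, field names, and query shape are illustrative, not FlowTrack's actual API):

```python
# Hypothetical sketch: every datastore query is scoped to the institution
# identifier carried in the caller's access token.

def query_records(datastore, access_token, table):
    """Return only rows belonging to the token's institution."""
    institution_id = access_token["institution_id"]  # set at authentication
    return [row for row in datastore[table]
            if row["institution_id"] == institution_id]

datastore = {"visits": [
    {"institution_id": "inst-1", "patient": "A"},
    {"institution_id": "inst-2", "patient": "B"},
]}
token = {"institution_id": "inst-1"}
print(query_records(datastore, token, "visits"))  # only inst-1 rows
```

The key property is that the caller never supplies the institution filter directly; it is derived from the authenticated token, so a client cannot query another institution's data by altering request parameters.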
For details on the backup and recovery process, see controls and procedures defined in Data Management.
FlowTrack uses AWS CloudWatch/CloudTrail and ELK to monitor the entire cloud service operation. If a system failure occurs and/or an alarm is triggered, key personnel are notified by text, chat, and/or email message in order to take appropriate corrective action. Escalation may be required, and there is an on-call rotation for major services when further support is necessary.
FlowTrack uses a security agent to monitor production systems. The agents monitor system activities, generate alerts on suspicious activities and report on vulnerability findings to a centralized management console.
The security agent is installed on all on-premise Linux servers. It is also built into Amazon Machine Images (AMIs), Dockerfiles, or server build scripts for use in FlowTrack AWS environments.
All databases, data stores, and file systems are encrypted with AES-256 using separate keys for each storage type. The keys are rotated periodically.
Encryption and key management for local disk encryption of on-premise servers and end-user devices follow the defined best practices for Windows, macOS, and Linux/Unix operating systems, such as BitLocker and FileVault.
All external data transmission is encrypted end-to-end using encryption keys managed by FlowTrack. This includes, but is not limited to, cloud infrastructure and third party vendors and applications.
Transmission encryption keys and systems that generate keys are protected from unauthorized access. Transmission encryption key materials are protected with access controls, and may only be accessed by privileged accounts.
Transmission encryption keys use a minimum of 2048-bit RSA keys, or keys and ciphers of equivalent or higher cryptographic strength (e.g., 256-bit AES session keys in the case of IPSec encryption).
Transmission encryption keys are limited to use for one year and then must be regenerated.
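The one-year regeneration rule can be sketched as a simple age check, for example as part of a periodic compliance scan. The function and field names are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# One-year lifetime for transmission encryption keys, per policy.
KEY_MAX_AGE = timedelta(days=365)

def needs_regeneration(created_at, now=None):
    """True once a key has reached the one-year policy limit."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= KEY_MAX_AGE

created = datetime(2023, 1, 1, tzinfo=timezone.utc)
print(needs_regeneration(created, now=datetime(2023, 6, 1, tzinfo=timezone.utc)))  # False
print(needs_regeneration(created, now=datetime(2024, 2, 1, tzinfo=timezone.utc)))  # True
```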
For all FlowTrack APIs, authentication, authorization, and auditing are enforced for all remote systems sending, receiving, or storing data.
System logs of all access to and transmission of Production Data are kept. These logs must be available for audit.
All internet and intranet connections are encrypted and authenticated using TLS 1.2 (a strong protocol), ECDHE_RSA with P-256 (a strong key exchange), and AES_128_GCM (a strong cipher).
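A client-side sketch of this TLS profile using Python's standard `ssl` module. The cipher selector string is an OpenSSL-style expression and is an assumption for illustration; the exact cipher suites available depend on the linked OpenSSL build, and server-side configuration ultimately determines what is negotiated.

```python
import ssl

# Sketch of a TLS policy matching the profile above: TLS 1.2 minimum,
# ECDHE key exchange, AES-GCM ciphers.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")  # e.g. ECDHE-RSA-AES128-GCM-SHA256

# Certificate verification and hostname checking are on by default
# with create_default_context(), which provides the authentication half.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```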
Restricted and sensitive data may not be sent over electronic end-user messaging channels such as email or chat unless end-to-end encryption is enabled; for example, in Telegram, or by sending an encrypted, expiring download link.
Data in Use, sometimes known as Data in Process, refers to active data being processed by systems and applications which is typically stored in a non-persistent digital state such as in computer random-access memory (RAM), CPU caches, or CPU registers.
FlowTrack applications implement a row-level tenancy identifier to protect data in a multi-tenancy deployment. Application rules are enforced so that requests to access this data must pass the tenancy identifier. In addition, FlowTrack applications may incorporate advanced security features such as Runtime Application Self-Protection (RASP) modules and Role-Based Access Control (RBAC) for protection of data in use.
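The RBAC component mentioned above can be sketched as a role-to-permission mapping consulted on every request. The roles and permission names below are hypothetical examples, not FlowTrack's actual model.

```python
# Minimal role-based access control sketch; roles and permissions
# are illustrative assumptions.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi"},
    "admin":     {"read_phi", "write_phi", "manage_users"},
    "auditor":   {"read_audit_log"},
}

def authorize(role, permission):
    """Deny by default: unknown roles have no permissions."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(authorize("clinician", "read_phi"))   # True
print(authorize("clinician", "write_phi"))  # False
```

Combined with the tenancy identifier, this gives two independent checks on data in use: *which tenant's* rows a request may touch, and *what actions* the caller's role permits on them.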
FlowTrack uses AWS Key Management Service (KMS) for encryption key management.
KMS keys are unique to FlowTrack environments and services.
KMS keys are automatically rotated yearly.
FlowTrack uses AWS Certificate Manager (ACM) and Let's Encrypt for certificate management.
Certificates are renewed automatically.
The security team monitors certificates for expiration, potential compromise, and use/validity. The certificate revocation process is invoked if a certificate is no longer needed or upon discovery of potential compromise.
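The expiration-monitoring check can be sketched as a threshold comparison on each certificate's not-after date. The 30-day warning window is an assumed value for illustration; in practice the dates would be read from the certificates or from ACM.

```python
from datetime import datetime, timedelta, timezone

# Assumed warning threshold before certificate expiry.
WARN_WINDOW = timedelta(days=30)

def expiring_soon(not_after, now=None):
    """Flag a certificate whose not-after date is within the warning window."""
    now = now or datetime.now(timezone.utc)
    return not_after - now <= WARN_WINDOW

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(expiring_soon(now + timedelta(days=10), now=now))  # True
print(expiring_soon(now + timedelta(days=90), now=now))  # False
```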
When appropriate, FlowTrack engineering should implement "Versioning" and "Lifecycle" policies, or an equivalent data management mechanism, such that direct edit and delete actions on the data are not allowed, preventing accidental or malicious overwrite. This protects against human error and cyberattacks such as ransomware.
In AWS, IAM and S3 bucket policies in production are implemented accordingly when the environments are configured. When changes must be made, a new version is created instead of editing and overwriting existing data.
All edits create a new version and old versions are preserved for a period of time defined in the lifecycle policy.
Data objects are "marked for deletion" when deleted so that they are recoverable if needed within a period of time defined according to the data retention policy.
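The versioning and delete-marker behavior described above can be illustrated with a toy in-memory store. This is a conceptual sketch of the semantics (every write appends a version; a delete only adds a marker that can be rolled back), not an implementation of S3 itself; retention-driven purging of old versions is omitted.

```python
from datetime import datetime, timezone

# Toy sketch of versioning with delete markers, mirroring the
# behavior described above. Names are illustrative.

class VersionedStore:
    def __init__(self):
        self._versions = {}  # key -> list of (timestamp, value or None)

    def put(self, key, value):
        """Writes never overwrite; each put appends a new version."""
        self._versions.setdefault(key, []).append(
            (datetime.now(timezone.utc), value))

    def delete(self, key):
        """Deletes only append a marker; old versions stay recoverable."""
        self.put(key, None)

    def get(self, key):
        history = self._versions.get(key, [])
        if not history or history[-1][1] is None:
            return None  # absent or marked for deletion
        return history[-1][1]

    def recover(self, key):
        """Remove a trailing delete marker, restoring the prior version."""
        history = self._versions.get(key, [])
        if history and history[-1][1] is None:
            history.pop()

store = VersionedStore()
store.put("report", "v1")
store.put("report", "v2")
store.delete("report")   # marked for deletion, not destroyed
store.recover("report")  # roll back the delete marker
print(store.get("report"))  # v2
```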
Data is archived offsite, i.e. to a separate AWS account and/or a specific region, and/or using a multi-region data store.
Additionally, all access to sensitive data is authenticated, and audited via logging of the infrastructure, systems and/or application.
Fincosa LLC, 220 Calle Manuel Domenech #2012, San Juan, PR, 00918, USA