The Clearinghouse consists of two services: the Clearinghouse-EDC and the Clearinghouse-App. The Clearinghouse-EDC terminates IDS connections and maps the requests to the API of the Clearinghouse-App. The Clearinghouse-App is the brain of the Clearinghouse: it encrypts and stores log messages in MongoDB and provides mechanisms to query for log messages.
Short history lesson
The clearinghouse-app previously consisted of three separate microservices - logging, keyring, document - which were merged into one service, because the services were too tightly coupled and there was no benefit in keeping them separate. The new service is simply called “clearinghouse-app”.
The Clearinghouse-EDC receives IDS multipart messages with a message of type ids:LogMessage in the header and an arbitrary payload. The following shows an example of such a multipart message:
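The example below is illustrative, not normative: the endpoint path, `@id`, and attribute values are placeholders, and the exact set of `ids:` attributes depends on the InfoModel version in use.

```
POST /messages/log/<process-id> HTTP/1.1
Content-Type: multipart/form-data; boundary=X

--X
Content-Disposition: form-data; name="header"
Content-Type: application/json

{
  "@context": "https://w3id.org/idsa/contexts/context.jsonld",
  "@type": "ids:LogMessage",
  "@id": "https://w3id.org/idsa/autogen/logMessage/1",
  "ids:modelVersion": "4.0.0",
  "ids:issued": "2023-01-01T00:00:00.000Z",
  "ids:issuerConnector": "https://example.org/connector",
  "ids:senderAgent": "https://example.org/agent"
}
--X
Content-Disposition: form-data; name="payload"
Content-Type: application/json

{ "event": "data transfer completed" }
--X--
```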
The logging service (now an entity inside the merged clearinghouse-app) is responsible for orchestrating the flow between the document service and the keyring service:
A message to be logged consists of two parts originating from the IDS communication structure: a header and a payload.
The logging service creates a process id (if one does not exist yet) and checks the authorization.
After all prerequisites are checked and completed, the logging service merges header and payload into a Document, retrieves the transaction counter and assigns it to the Document.
Now the document service comes into play: it first checks whether the document already exists, then requests the keyring service to generate a key map for the document. The key map is used to encrypt the document (back in the document service), and the document is then stored in the database.
Finally, the transaction counter is incremented and a receipt is signed and sent back to the Clearinghouse-EDC.
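The orchestration above can be sketched roughly as follows. This is a minimal, stdlib-only Python sketch with all services stubbed in memory; function and field names are assumptions, not the actual Rust API, and encryption and receipt signing are omitted.

```python
import hashlib
import json

# Stub stores; in the real service these are MongoDB collections.
processes = set()
documents = {}
transaction_counter = 0

def log_message(process_id, header, payload, authorized=True):
    """Sketch of the logging flow: create process, check authorization,
    merge header + payload, assign transaction counter, store, return receipt."""
    global transaction_counter
    if process_id not in processes:      # create the process id if it does not exist
        processes.add(process_id)
    if not authorized:
        raise PermissionError("connector not authorized for this process")
    # Merge header and payload into a Document and assign the transaction counter.
    document = {"header": header, "payload": payload, "tc": transaction_counter}
    doc_id = hashlib.sha256(json.dumps(document, sort_keys=True).encode()).hexdigest()
    if doc_id in documents:              # document service: duplicate check
        raise ValueError("document already exists")
    documents[doc_id] = document         # encryption via the keyring omitted here
    transaction_counter += 1             # counter incremented after storing
    # The real service signs this receipt before returning it.
    return {"doc_id": doc_id, "tc": document["tc"]}

receipt = log_message("process-1", {"@type": "ids:LogMessage"}, {"event": "transfer"})
```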
There is a randomly generated Master Key stored in the database.
Each document has a number of fields. For each document a random secret is generated, from which multiple secrets are derived with the HKDF algorithm. These derived secrets are used to encrypt the fields of the document with AES-256-GCM-SIV.
The original secret itself is also encrypted with AES-256-GCM-SIV, using a key derived from the Master Key, and is stored in the database alongside the Document.
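The derivation step can be sketched with a stdlib-only HKDF-SHA256 (RFC 5869). This is illustrative: the actual service uses Rust crypto crates, the field names used as context info are assumptions, and the AES-256-GCM-SIV encryption itself is omitted here.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 extract step: PRK = HMAC-SHA256(salt, input keying material)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 expand step: stretch the PRK into `length` bytes of output."""
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# One random secret per document ...
doc_secret = os.urandom(32)
prk = hkdf_extract(b"", doc_secret)
# ... expanded into one 32-byte key per field, using the field name as context info.
# Each key would then be used for AES-256-GCM-SIV on that field.
field_keys = {name: hkdf_expand(prk, name.encode(), 32)
              for name in ("header", "payload", "timestamp")}
```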
For the current operation of the MDS, we would rely on the Clearing House specification of the IDS RAM 4.0.
In doing so, the existing Clearing House can be adapted and improved through the following points:
Replacement of the Trusted Connector with the EDC
Merging of the microservices into the CH-App
Replacement of the Rocket web server with Axum
Maintenance and optimizations
Stability through mutexes
Updating the dependencies
This makes the Clearing House IDS RAM 4.0 compliant and backward compatible with EDC MS8
Blockchain
Masterkey
Future
In the DSP, there will no longer be a Clearing House as specified in the IDS RAM 4.0.
The DSP regards the Clearing House merely as a participant.
The logs of the connectors will then reside decentrally, only in the respective connector.
For logging, the Clearing House could therefore conclude a contract with all connectors in order to request these logs.
With regard to the upcoming migration to did:web, the Clearing House offers a sensible replacement for the DAPS.
The Clearing House could issue Verifiable Credentials once participants have entered into the contract with it and fulfil the basic requirements for participating in the dataspace. Each participant may only interact with members of the dataspace that can present this Verifiable Credential.
This ensures that all participants in the dataspace accept the Clearing House.
Before a transaction takes place, a contract is concluded. In this step, the process could already be created in the Clearing House. It is also possible to specify multiple connector IDs in order to define who has read and write permissions.
The created PID must be shared with all connectors.
The connectors can log to the same PID in order to group transactions by contract.
The MDS can set its own connector as the default in order to gain access to all transactions.
Data in the Clearing House is stored encrypted and practically immutable. There are multiple ways in which the Clearing House enforces Data Immutability:
The Logging Service provides no way to update an already existing log entry in the database.
Log entries in the database include a hash value of the previous log entry, chaining together all log entries. Any change to a previous log entry would require rehashing all following log entries.
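The chaining scheme can be illustrated like this. It is a simplified sketch: real entries are encrypted documents, and the field names and genesis value are assumptions.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash an entry canonically (sorted keys) with SHA-256."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

# Build a chain: each entry stores the hash of the previous entry.
log = []
prev = "0" * 64  # genesis value for the first entry
for payload in ({"event": "contract agreed"}, {"event": "data transferred"}):
    entry = {"payload": payload, "prev_hash": prev}
    prev = entry_hash(entry)
    log.append(entry)

def verify(log: list) -> bool:
    """Walk the chain; any modified entry breaks every following link."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        prev = entry_hash(entry)
    return True
```

Tampering with any stored entry changes its hash, so the `prev_hash` of the next entry no longer matches and verification fails, which is exactly why an attacker would have to rehash all following entries.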
A connector logging information in the Clearing House receives a signed receipt from the Clearing House that includes, among other things, a timestamp and the current chain hash. A single valid receipt in the possession of any connector is enough to detect any change to data made up to the time indicated in the receipt.
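A receipt of this kind could be checked along these lines. This sketch signs with HMAC for testability; the actual receipt is signed with the Logging Service's signing key, and the field names here are assumptions.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # stands in for the service's real signing key

def sign_receipt(doc_id: str, timestamp: str, chain_hash: str) -> dict:
    """Serialize the receipt body canonically and attach an HMAC signature."""
    body = json.dumps({"doc_id": doc_id, "timestamp": timestamp,
                       "chain_hash": chain_hash}, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_receipt(receipt: dict) -> bool:
    """Recompute the signature; any change to the body invalidates the receipt."""
    expected = hmac.new(SIGNING_KEY, receipt["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

receipt = sign_receipt("doc-1", "2023-01-01T00:00:00Z", "ab" * 32)
```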
The IDS Clearing House Service currently implements the Logging Service. Other services that comprise the Clearing House may follow. The Clearing House Service consists of two parts:
The Clearing House App is a REST API written in Rust that implements the business logic of the Clearing House. The Clearing House Processors is a library written in Java that integrates the Clearing House App into the Trusted Connector. The Clearing House Processors provide the multipart and idscp2 endpoints described in the IDS-G. These are used by the IDS connectors to interact with the Clearing House. Both Clearing House App and Clearing House Processors are needed to provide the Clearing House Service.
The Clearing House Service API requires a Trusted Connector (Version 7.1.0+) for deployment. The process of setting up a Trusted Connector is described here. Using a docker image of the Trusted Connector should be sufficient for most deployments:
The Clearing House Processors are written in Java for use in the Camel Component of the Trusted Connector. To configure the Trusted Connector for the Clearing House Service API, it needs access to the following files inside the docker container (e.g. mounted as a volume):
clearing-house-processors.jar: The Clearing House Processors need to be placed in the /root/jars folder of the Trusted Connector. The jar file needs to be built from the Clearing House Processors using gradle.
clearing-house-routes.xml: The camel routes required by the Clearing House need to be placed in the /root/deploy folder of the Trusted Connector.
application.yml: This is a new configuration file of Trusted Connector 7.0.0+. The file version in this repository enables the use of some of the environment variables documented in the next section.
Besides those files that are specific for the configuration of the Clearing House Service API, the Trusted Connector requires other files for its configuration, e.g. a truststore and a keystore with appropriate key material. Please refer to the Documentation of the Trusted Connector for more information. Also, please check the Examples as they contain up-to-date configurations for the Trusted Connector.
The Clearing House Processors can override some standard configuration settings of the Trusted Connector using environment variables. If these variables are not set, the Clearing House Processors will use the standard values provided by the Trusted Connector. Some of the variables are mandatory and have to be set:
TC_DAPS_URL: The URL of the DAPS used by the Clearing House. The Trusted Connector uses https://daps.aisec.fraunhofer.de/v3 as the default DAPS URL.
TC_KEYSTORE_PW: The password of the key store mounted in the Trusted Connector. Defaults to password.
TC_TRUSTSTORE_PW: The password of the trust store mounted in the Trusted Connector. Defaults to password.
TC_CH_ISSUER_CONNECTOR (mandatory): Issuer connector needed for IDS Messages as specified by the InfoModel.
TC_CH_AGENT (mandatory): Server agent needed for IDS Messages as specified by the InfoModel.
SERVICE_SHARED_SECRET (mandatory): Shared secret, see the Configuration section.
SERVICE_ID_TC (mandatory): Internal ID of the Trusted Connector that is used by the Logging Service to identify the Trusted Connector.
SERVICE_ID_LOG (mandatory): Internal ID of the Logging Service.
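Put together, a docker-compose environment section for the Trusted Connector might look like the following; all values are placeholders and must be replaced with your deployment's actual settings.

```
environment:
  TC_DAPS_URL: "https://daps.aisec.fraunhofer.de/v3"
  TC_KEYSTORE_PW: "password"                               # change in production
  TC_TRUSTSTORE_PW: "password"                             # change in production
  TC_CH_ISSUER_CONNECTOR: "https://example.org/connector"  # placeholder
  TC_CH_AGENT: "https://example.org/agent"                 # placeholder
  SERVICE_SHARED_SECRET: "change-me"
  SERVICE_ID_TC: "trusted-connector-1"                     # placeholder
  SERVICE_ID_LOG: "logging-service"                        # placeholder
```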
Please read the configuration section of the service (Logging Service, Document API, Keyring API) you are trying to run before using docker run or docker-compose. All containers built with the provided dockerfiles require at least one volume:
The configuration file Rocket.toml is expected at /server/Rocket.toml
Containers of the Keyring API require an additional volume:
/server/init_db needs to contain the default_doc_type.json
Containers of the Logging Service require an additional volume:
The folder containing the signing key needs to match the path configured for the signing key in Rocket.toml, e.g. /server/keys
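A hypothetical docker invocation mounting the volumes described above could look like this; the image name and host paths are placeholders, not the project's published image.

```
docker run \
  -v $(pwd)/Rocket.toml:/server/Rocket.toml \
  -v $(pwd)/keys:/server/keys \
  clearing-house-api:latest
```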
The Clearing House services use signed JWTs with HMAC and a shared secret to ensure a minimum level of integrity for the requests received. The Trusted Connector as well as the services (Logging Service, Document API, Keyring API) need access to the shared secret.
For production use please consider using additional protection measures.
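The token scheme can be sketched with a stdlib-only HS256 JWT. This is illustrative: the claim names are assumptions, and real deployments should use a vetted JWT library rather than hand-rolled encoding.

```python
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-shared-secret"  # in a deployment this comes from configuration

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build header.payload.signature with HMAC-SHA256 (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(secret, f"{header}.{payload}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature; only holders of the shared secret pass."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)

token = sign_jwt({"client_id": "trusted-connector-1"}, SHARED_SECRET)
```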