Frequently Asked Questions
General Questions
Which requirements must be met by the vessel's IT infrastructure?
The Hoppe Ship-to-Shore connection does not provide its own VSAT connection; the client must therefore ensure that a VSAT connection is available. For communication with the land-based servers, the following IP addresses and port must be enabled for outgoing TCP traffic from the HOMIP2 in the vessel's firewall.
Address | IP | Port |
---|---|---|
Primary Address | 75.2.111.192 | 11550 |
Fallback Address | 99.83.166.216 | 11550 |
A detailed checklist can be found in the Ship-to-Shore Fact Sheet provided in the Download section of our product page.
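As a minimal sketch (standard Python only; addresses and port taken from the table above), the following check can help verify from on board that the firewall permits the required outgoing TCP connections:

```python
import socket

# Ship-to-Shore endpoints from the table above (outgoing TCP, port 11550)
ENDPOINTS = [
    ("75.2.111.192", 11550),   # primary address
    ("99.83.166.216", 11550),  # fallback address
]

for host, port in ENDPOINTS:
    try:
        # Open a plain TCP connection; a timeout or refusal usually points
        # to a firewall rule that still needs to be added.
        with socket.create_connection((host, port), timeout=10):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} NOT reachable: {exc}")
```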
Is it possible to load external or historic data into the data pool via an interface?
For data storage, Hoppe Marine uses a "Data Lake" structure: a concept in which structured data can be loaded into the data pool from variable sources. Structured data here means, for example, log data, telemetry data, etc. For data analysis, time series data in various formats are the most relevant. Currently, the following formats are accepted: CSV, JSON, SQLite and Parquet. Furthermore, it is important that the client defines the column name assignments, because for evaluation the data must be mapped to a universal naming standard.
The following example can be given for illustration:
The automation system of a vessel exports data in the following format:
Time stamp | Main Engine Power | Main Engine Speed |
---|---|---|
21/06/2018 11:09:33 | 20111 | 76 |
The data must be transformed into a standardized format for better handling. The target format therefore looks as follows:
ti.utc.act.ts@last | me.pow.act.kw@avg | me.rev.act.rpm@avg |
---|---|---|
1529579373.00 | 20111 | 76 |
The translation of the source data into the target format is defined by a so-called "Mapping". Details about the structure of the Mapping can be found in our API documentation (https://docs.hoppe-sts.com). Hoppe Marine is delighted to assist with the initial data import into the data pool and the initial creation of the Mapping.
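For illustration only, the sketch below translates the automation-system export above into the standardized column names using a plain dictionary; this is a simplified stand-in for the actual Mapping structure described in the API documentation, and the input file name is hypothetical.

```python
import csv
from datetime import datetime, timezone

# Simplified stand-in for a Mapping: source column -> standardized name.
# The real Mapping structure is defined in the API documentation.
MAPPING = {
    "Time stamp": "ti.utc.act.ts@last",
    "Main Engine Power": "me.pow.act.kw@avg",
    "Main Engine Speed": "me.rev.act.rpm@avg",
}

def to_epoch(value: str) -> float:
    """Convert '21/06/2018 11:09:33' (assumed UTC) to a Unix timestamp."""
    dt = datetime.strptime(value, "%d/%m/%Y %H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.timestamp()

def translate_row(row: dict) -> dict:
    out = {}
    for source, target in MAPPING.items():
        value = row[source]
        out[target] = to_epoch(value) if target.startswith("ti.utc") else float(value)
    return out

with open("automation_export.csv", newline="") as f:   # hypothetical export file
    for row in csv.DictReader(f):
        print(translate_row(row))
        # e.g. {'ti.utc.act.ts@last': 1529579373.0, 'me.pow.act.kw@avg': 20111.0, 'me.rev.act.rpm@avg': 76.0}
```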
Security
Is the Ship-to-Shore transmission secure?
The Ship-to-Shore transmission utilizes a multi-level security concept, which distinguishes between Identity Protection, Access Protection and Integrity Protection.
Identity Protection
The first level of the security concept ensures the trustworthiness of the communication partners. Every communication endpoint is fitted ex works with a private cryptographic key. This key never leaves the device and therefore cannot be compromised. Only a device that knows a correct key is able to transmit or receive data.
Access Protection
Once it is ensured that the data comes from a trustworthy source, a secure TLS-encrypted connection between the communication partners is established in the next step. This encrypted connection prevents access by third parties; only the two communication partners are able to read the data in clear text.
Integrity Protection
After successful transmission, a further step ensures that the data is intact and corresponds to the data that was sent from the vessel. For this purpose, cryptographic signatures according to the industry standard RFC 7519 are used.
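As a rough illustration of such an RFC 7519 integrity check, the sketch below verifies a received token against an elliptic-curve public key using the PyJWT library; the key file, token handling and algorithm choice (ES256) are assumptions for illustration, not Hoppe's actual implementation.

```python
import jwt  # PyJWT

# Illustrative only: verify that a received token was signed with the
# sender's elliptic-curve private key, using the matching public key.
with open("vessel_public_key.pem", "rb") as f:   # hypothetical key file
    public_key = f.read()

token = "<received JWT>"                          # placeholder for the received signature token
try:
    claims = jwt.decode(token, public_key, algorithms=["ES256"])
    print("Signature valid, payload intact:", claims)
except jwt.InvalidTokenError:
    print("Integrity check failed: data was altered or signed with a different key")
```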
How and when does the encryption take place?
Service/Device | Encryption |
---|---|
Datastore on HOMIP2 | None, but secured by Debian system security and RBAC scheme. |
Export data files | Cryptographically signed with elliptic curve private key. |
Data transport to shore | TLS encryption in transit. |
Data store on shore | AES encryption at rest. No in-memory encryption during data handling. |
Data distribution to customer via API | TLS encryption in transit. |
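The table states that export data files are signed with an elliptic-curve private key. The sketch below shows what such a sign/verify round trip can look like with the Python `cryptography` package; curve, hash algorithm and file name are assumptions for illustration only.

```python
from pathlib import Path

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# For illustration a key pair is generated here; on the HOMIP2 the private
# key is installed ex works and never leaves the device.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

data = Path("export_block.db").read_bytes()              # hypothetical export file
signature = private_key.sign(data, ec.ECDSA(hashes.SHA256()))

# On the receiving side, the public key proves that the file is unmodified and
# really originates from the device holding the private key.
public_key.verify(signature, data, ec.ECDSA(hashes.SHA256()))  # raises InvalidSignature on tampering
```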
Data Handling
Data Transmission
How does the data transmission work?
The data transmission is file-based, and the interval for data exports can be configured. For transmission, a satellite connection is established and a direct, encrypted connection to fixed IP addresses, without DNS, is used. All files are collected as database files and transmitted together in blocks. Using a hash method, the entire file content is subject to an integrity test. The transmission of the encrypted data is subject to bandwidth regulation.
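As a rough illustration of a hash-based integrity test, the sketch below computes a SHA-256 digest of an export file before transmission and compares it with the digest recomputed on shore; the actual hash algorithm used on the HOMIP2 is not specified here and may differ.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of the complete file content."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# On board: digest computed before export and sent along with the file.
sent_digest = file_digest("export_block.db")        # hypothetical file name

# On shore: digest recomputed after reception.
received_digest = file_digest("export_block.db")

if sent_digest != received_digest:
    print("Integrity test failed - file must be transmitted again")
```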
What kind of data transmission volume between the ship and the shore can we expect?
Studies of the data volume revealed the following rough relations between the number of logged signals and the transmitted data volume:
Number of logged signals | Logging rate | Raw export data volume per 24h | Compressed export volume per 24h | Estimated data volume per month |
---|---|---|---|---|
300 | 60 s | 3.5 MB | 0.7 MB | 21 MB |
300 | 10 s | 21 MB | 4.1 MB | 123 MB |
1000 | 10 s | 69 MB | 14 MB | 420 MB |
1000 | 1 s | 691 MB | 138 MB | 4140 MB |
Example calculation:
- 240 signals for tank content measurement of 60 tanks (height, volume, mass, density), 240 data points logged once per minute
- 40 nautical signals e.g. GPS, wind, speed, heading... = 40 data points logged once per minute
- 50 main engine signals e.g. RPM, torque, power, 12 x TC RPM, 12 x exhaust temp, 12 x something else. About 50 data points logged every 10s
- 20 high resolution data points of the vessel motion state. About 20 signals logged once per second.
Total data file size:
- Volume uncompressed: 20 MB/day
- Volume compressed: 4 MB/day
Depending on the export interval, an additional transmission overhead needs to be considered (see "How big is the transmission overhead?").
For the example above, transmitted every 6 hours (about 1 MB of data per transmission), the transmission overhead would be less than 10%.
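The figures in the example can be approximated from the table above, which works out to roughly 8 bytes per raw sample and a compression ratio of about 5:1. The sketch below reproduces the ~20 MB/day and ~4 MB/day numbers under these assumed constants; they are estimates, not official figures.

```python
# Assumed constants derived from the table above (not official figures):
BYTES_PER_SAMPLE = 8        # ~3.5 MB / (300 signals * 1440 samples/day)
COMPRESSION_RATIO = 5       # compressed volume is roughly 1/5 of the raw volume

signal_groups = [
    (240, 60),   # tank content signals, logged every 60 s
    (40, 60),    # nautical signals, logged every 60 s
    (50, 10),    # main engine signals, logged every 10 s
    (20, 1),     # vessel motion signals, logged every 1 s
]

samples_per_day = sum(n * 86_400 // interval for n, interval in signal_groups)
raw_mb = samples_per_day * BYTES_PER_SAMPLE / 1e6
print(f"raw: ~{raw_mb:.0f} MB/day, compressed: ~{raw_mb / COMPRESSION_RATIO:.0f} MB/day")
# -> roughly matches the 20 MB/day uncompressed and 4 MB/day compressed in the example above
```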
What happens when the integrity test fails? Will the entire file content for which the integrity test failed be transmitted again?
Yes, but this only happens to files for which the transmission cannot be continued. In the normal case of a connection failure, the transmission continues where it was interrupted. Only in case of a high number of packet losses is the transmission repeated completely. A bandwidth detection prevents any transmission if the risk of a weak transmission is detected, so the complete repetition of a transmission is only the last resort. The maximum number of transmission attempts can be set per file; thus, expensive and endless attempts are avoided. When a better connection quality is detected, the transmission can be restarted via a web interface directly on board.
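A simplified sketch of the retry policy described above: transmission resumes where it was interrupted, is repeated from scratch only after heavy packet loss, and gives up after a configurable number of attempts per file. All names here are illustrative placeholders, not the on-board implementation.

```python
class ConnectionInterrupted(Exception):
    """Transient failure; transmission can resume at the reached byte offset."""
    def __init__(self, offset: int):
        self.offset = offset

class HighPacketLoss(Exception):
    """Severe failure; the file has to be repeated completely."""

def send_from(path: str, offset: int) -> None:
    """Hypothetical transport call: send `path` starting at byte `offset`.
    Raises ConnectionInterrupted or HighPacketLoss on failure."""
    ...

def transmit_file(path: str, max_attempts: int = 3) -> bool:
    offset = 0
    for _ in range(max_attempts):                  # attempts are limited per file
        try:
            send_from(path, offset)
            return True                            # file completely transmitted
        except ConnectionInterrupted as exc:
            offset = exc.offset                    # continue where it got interrupted
        except HighPacketLoss:
            offset = 0                             # last resort: repeat completely
    return False   # give up; a manual restart is possible via the web interface
```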
What is the origin of the timestamp for time series data?
The time series data is logged by Hoppe Marine's iDB server, which uses the master time on the HOMIP2 as its time source. This time source is configured on the HOMIP2 display and is independent of the date/time of the operating system. If the customer has a GPS receiver on board, the HOMIP2 can also use the GPS time signal as the time source. GPS time signals are preferred, since they provide a well-established time standard and minimize the chance of entering the date/time incorrectly.
How big is the transmission overhead in addition to the transmitted amount of data?
Encryption, connection set-up, etc. have already been optimized. However, one should keep in mind: the smaller the single exported data blocks are (i.e. the shorter the export intervals), the bigger the relative overhead. This is similar to a letter that always costs 80 ct, no matter whether it contains one A4 page or three. The following image shows measuring results for the interrelation between payload size and transmission overhead. It clearly indicates that the relative overhead becomes smaller when bigger file sizes are encrypted and transmitted.
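The letter analogy can be made concrete with a small calculation. The fixed per-transmission cost is assumed here at 100 kB purely for illustration (roughly consistent with the "less than 10% at about 1 MB" statement above); the real value depends on the connection.

```python
FIXED_OVERHEAD_KB = 100   # assumed per-transmission cost (handshake, encryption, ...)

for payload_kb in (100, 500, 1_000, 5_000):
    relative = FIXED_OVERHEAD_KB / (FIXED_OVERHEAD_KB + payload_kb)
    print(f"payload {payload_kb:>5} kB -> relative overhead {relative:.0%}")
# The larger the exported block, the smaller the relative overhead.
```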
Why is time correctness essential for meaningful, high-quality data?
Our goal is to log and provide ship operation data accurately and consistently. This makes it possible, in retrospect, to identify the exact date and time at which an event occurred, or what a signal value was at that moment. This cannot be done if the time source is not reliable. As a worst-case example, if the time has been set back, duplicate data can occur for the same time range. If this happens, a meaningful report cannot be generated.
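As an illustration of the worst case mentioned above, the sketch below flags non-monotonic timestamps in a logged series, which is how a duplicated time range after a backwards clock change would show up; the values are made up for the example.

```python
# Epoch timestamps as they were logged; the jump back at the fourth entry
# produces a duplicated time range, as described above.
timestamps = [1529579373.0, 1529579433.0, 1529579493.0, 1529579400.0, 1529579460.0]

for previous, current in zip(timestamps, timestamps[1:]):
    if current <= previous:
        print(f"time went backwards: {previous} -> {current}; "
              "data in this range is ambiguous and a meaningful report cannot be generated")
```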