Monday, September 17, 2018

Configuring Orthanc with Postgres backend with a network data directory

So we have configured Orthanc with a Postgres backend. To support a large-scale data store, we mapped a network directory as the data directory of Postgres. Then we configured Orthanc to use Postgres as its backend data store, instead of its default SQLite backend, via the Postgres plugin. There is also an option for a MySQL/MariaDB backend, but we found MySQL unstable with its data in a network directory.
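For reference, this is done through the Orthanc configuration file (/etc/orthanc/orthanc.json in our setup, as seen in the service status further below). A minimal sketch of the relevant PostgreSQL section is shown here; the plugin path, host, and credentials are placeholders that must be adapted to your installation:

{
  // Path where the PostgreSQL plugin is installed (placeholder)
  "Plugins" : [ "/usr/share/orthanc/plugins" ],

  "PostgreSQL" : {
    "EnableIndex" : true,       // use Postgres for the Orthanc index instead of SQLite
    "EnableStorage" : true,     // optionally store the DICOM files in Postgres as well
    "Host" : "localhost",
    "Port" : 5432,
    "Database" : "orthanc",
    "Username" : "postgres",
    "Password" : "change-me"
  }
}

With "EnableStorage" set to false, only the index goes into Postgres and the DICOM files remain in the regular storage directory on disk.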

However, since the Postgres data directory is on a network share, we have to make sure everything comes up properly. Unfortunately, after a reboot the network directory often does not mount on its own. Therefore, despite our configuration to start Postgres and Orthanc at boot time, they both fail.

Data directory inaccessible → Postgres fails to start. Postgres fails to start → Orthanc fails to start.
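One way to automate this ordering (a sketch, assuming the network share is mounted through /etc/fstab or a systemd mount unit, and the Postgres data directory is /opt/pacs/postgres as in the status output below) is to add drop-in overrides so that systemd waits for the mount before PostgreSQL, and for PostgreSQL before Orthanc. `systemctl edit` opens an editor for the drop-in file, into which you paste the [Unit] lines shown:

$ sudo systemctl edit postgresql

[Unit]
RequiresMountsFor=/opt/pacs/postgres
After=network-online.target
Wants=network-online.target

$ sudo systemctl edit orthanc

[Unit]
After=postgresql.service
Requires=postgresql.service

$ sudo systemctl daemon-reload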

We have to configure the services below on CentOS, in this same order.

1) PostgreSQL
$ sudo systemctl start postgresql

$ sudo systemctl enable postgresql

$ sudo systemctl status postgresql

● postgresql.service - PostgreSQL database server
   Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-17 14:46:41 EDT; 11min ago
 Main PID: 2655 (postgres)
   CGroup: /system.slice/postgresql.service
           ├─2655 /usr/bin/postgres -D /opt/pacs/postgres -p 5432
           ├─2657 postgres: logger process
           ├─2701 postgres: checkpointer process
           ├─2702 postgres: writer process
           ├─2703 postgres: wal writer process
           ├─2704 postgres: autovacuum launcher process
           ├─2705 postgres: stats collector process
           ├─2754 postgres: postgres orthanc ::1(48534) idle
           └─2755 postgres: postgres orthanc ::1(48536) idle

Sep 17 14:45:59 HOST.NAME systemd[1]: Starting PostgreSQL database server...
Sep 17 14:45:59 HOST.NAME pg_ctl[2652]: pg_ctl: another server might be running; trying to start server anyway
Sep 17 14:46:41 HOST.NAME systemd[1]: Started PostgreSQL database server.


2) Orthanc
$ sudo systemctl start orthanc

$ sudo systemctl enable orthanc

$ sudo systemctl status orthanc

● orthanc.service - Orthanc DICOM server
   Loaded: loaded (/usr/lib/systemd/system/orthanc.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-09-17 14:47:14 EDT; 8min ago
     Docs: man:Orthanc(1)
           http://www.orthanc-server.com/
 Main PID: 2753 (Orthanc)
   CGroup: /system.slice/orthanc.service
           └─2753 /usr/sbin/Orthanc /etc/orthanc/orthanc.json

Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.399045 ServerContext.cpp:167] Reloading the jobs from the last execution of Orthanc
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.399776 JobsEngine.cpp:281] The jobs engine has started with 2 threads
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.400023 ServerContext.cpp:293] Disk compression is disabled
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.400050 ServerIndex.cpp:1437] No limit on the number of stored patients
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.400490 ServerIndex.cpp:1454] No limit on the size of the storage area
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.400995 LuaContext.cpp:103] Lua says: Lua toolbox installed
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.403966 main.cpp:848] DICOM server listening with AET BMIPACS on port: 4242
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.404382 MongooseServer.cpp:1087] HTTP compression is enabled
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.405872 MongooseServer.cpp:1001] HTTP server listening on port: 8042 (HTTPS encryption is disabled, remote access is allowed)
Sep 17 14:47:15 HOST.NAME Orthanc[2753]: W0917 14:47:15.405915 main.cpp:667] Orthanc has started






To clean the data from Orthanc entirely

 

The easy way is to drop the Orthanc database.

mytest=# drop database orthanc;
ERROR:  database "orthanc" is being accessed by other users
DETAIL:  There are 2 other sessions using the database.

Yes, first we need to stop Orthanc!

[root@researchpacs postgres]# systemctl stop orthanc

Now drop the database. 
mytest=# drop database orthanc;
DROP DATABASE
mytest=#

Create the database again.
 
mytest=# create database orthanc;
CREATE DATABASE

Now you may start Orthanc again!

[root@researchpacs postgres]# systemctl start orthanc
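On its next start, Orthanc recreates its tables in the empty database. To double-check, you can list the tables (assuming the default postgres superuser, as in our setup):

$ sudo -u postgres psql -d orthanc -c '\dt'

If your DICOM files are stored on the filesystem rather than through the Postgres storage plugin, remember to clear the Orthanc storage directory as well.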

Tuesday, September 11, 2018

Configuring Kong API Gateway for your Service Endpoints

Kong provides complete documentation on its installation and a quick-start guide, covering tasks such as configuring a service in Kong.

In this post we will briefly look into configuring Kong, using its Docker container, as an API gateway for your backend services. This document uses Kong version 0.14.0-alpine, the latest version of Kong at the time of writing. Kong can be configured with a Postgres or Cassandra backend for its persistent storage. Here we will configure Kong with Postgres.

Install and Start Kong with Postgres

You have two options. If you want to get it done quickly, I recommend option #2, installing via Docker containers.

1. Download Kong's native installation for your operating system

Download and install Kong for your respective operating system 

Configure with Postgres:
$ psql -U postgres
postgres=# CREATE USER kong; CREATE DATABASE kong OWNER kong;


Run the Kong migrations:
$ kong migrations up

Start Kong
$ kong start

You may choose to start with verbose logs:
$ kong start -vv

You may need to create a Kong configuration file to load Kong with custom configuration:
$ sudo mkdir /etc/kong

$ sudo touch /etc/kong/kong.conf
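For example, the Postgres connection settings can be placed in kong.conf. A minimal sketch (the values are placeholders matching the user and database created above):

# /etc/kong/kong.conf
database = postgres
pg_host = 127.0.0.1
pg_port = 5432
pg_user = kong
pg_database = kong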

Now Kong should be running.
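One way to confirm this (assuming the default Admin API port of 8001) is to query the Admin API, which should return a 200 response with Kong's node information:

$ curl -i http://localhost:8001/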

 

2.  Install Kong via Docker containers.

We have our own "kong-ldap" repository with scripts that will install and configure Kong with Postgres in a container. 

$ cd kong-ldap

$ sh buildRun.sh

Optionally, you may also use these scripts to configure an OpenDJ LDAP directory as a Docker container as well, connected to Kong. However, the relevant commands are commented out, so you will need to uncomment them in the buildRun.sh script to get this working.

Kong has two interfaces: the user-facing proxy interface and the Admin interface. The Admin interface listens on port 8001 by default and should not be exposed to the public; administrators use it to create and configure the routes to the services behind the Kong API gateway. The user-facing interface listens on port 8000 by default and is exposed to the public, so that users can consume the services previously defined by the admin. Kong keeps the user-facing interface completely separate from the Admin interface.

Configure Kong for your Services

Configuring Kong as an API gateway for your services is a two-step procedure as of Kong 0.14. Previous versions of Kong provided a unified approach through their "api" objects, which are now deprecated and replaced by two entities known as "services" and "routes".

Make sure that you have Kong up and running. Then, execute the two commands below from the server that hosts Kong.

First, you need to define a "service" in Kong for each of your service/API groups (i.e., your backend applications or web services).


1) Create a Kong Service using the Kong Admin API

$ curl -i -X POST --url http://localhost:8001/services/ --data 'name=radiology' --data 'url=http://172.20.11.223:9099/services/v4/TCIA/query/'

Above, we assume your web application is hosted at http://172.20.11.223:9099/ and has a set of services under the base path /services/v4/TCIA/query/. We pass the complete URL of the original backend service deployment through the "url" parameter, as shown above, and create a service named "radiology" in Kong for these services, using the Kong Admin interface. Here, we assume Kong is hosted on localhost, with its Admin interface on the default port 8001. The /services/ endpoint lets you create and modify the service definitions in Kong.
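You can verify that the service was created by querying the same Admin endpoint (a quick sanity check, using the default Admin port again):

$ curl -i http://localhost:8001/services/radiology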


Second, you should add a "route" to the "radiology" service that you created.

2) Add a Route to the Kong Service

$ curl -i -X POST --url http://localhost:8001/services/radiology/routes --data 'paths=/radiology'

As shown above, we use KONG-ADMIN-INTERFACE/services/SERVICE-NAME/routes to configure the routes to the service that we defined; here the SERVICE-NAME is "radiology", as defined in the previous step. The "paths" parameter indicates the paths that Kong should match and forward to that service.


Accessing your services via Kong

Now, you may access your services via Kong:
http://172.20.11.222:8000/radiology/getImage
the same way you would access them directly:
http://172.20.11.223:9099/services/v4/TCIA/query/getImage

Here, we assume that you have deployed Kong on a server with the public address http://172.20.11.222 and that your services are at http://172.20.11.223.

You may of course do additional tasks such as rate limiting, request/response transformations, logging, and analytics with the Kong API Gateway. You may refer to the Kong documentation for pointers on achieving this.
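For instance, rate limiting could be enabled on the "radiology" service through the Admin API's plugins endpoint, roughly as follows (the limit of 100 requests per minute is an arbitrary example value):

$ curl -i -X POST --url http://localhost:8001/services/radiology/plugins --data 'name=rate-limiting' --data 'config.minute=100'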

Friday, August 24, 2018

Export Confluence HTML to Github Wiki

Confluence allows its HTML pages to be exported in a single zip. There is a Confluence-to-Github-Markdown tool that automatically converts these HTML files into Markdown (.md) files with a single command, run from the core directory:
$ confluence-to-github-markdown

Now you can create separate pages in the GitHub wiki with these .md files. However, there are certain limitations that need to be handled manually to complete the process.

1. Images. The GitHub wiki does not support uploading images; images are embedded in wiki pages via URLs. Therefore, we currently need to upload the images to a public location and reference that URL (see the example after this list).

2. Metadata. The tool also converts page metadata to Markdown (which is actually good, I think). We need to remove it manually wherever it is unnecessary, instead of showing it as plain text in the GitHub wiki.

3. The converter tool creates the name of the .md file from the HTML title. If you have ' or any other special characters in the title of your HTML, it will fail to convert and throw JavaScript errors. To fix this, change the title header from something such as "Rakshak’s RESTful API" to "Rakshak RESTful API" before executing the converter tool.

4. Internal links (links to named anchors in the Markdown) are not handled by the converter. The anchors must be added manually.

5. Links to other pages in the Confluence wiki are preserved as-is, rather than being converted to relative links. Therefore, they need to be replaced with the corresponding new links in the Markdown wiki.

6. Video embedding is not supported in Markdown. Therefore, the converter tool simply ignores any video files embedded in the HTML. To fix this, manually add the videos to the final wiki pages as simple links, as in [screencast](https://www.youtube.com/embed/S8juo0Dx68I).
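For point 1 above, once an image has been uploaded to a publicly accessible location, it can be embedded in the wiki page with a regular Markdown image link (the URL below is only a placeholder):

![architecture-diagram](https://example.com/images/architecture-diagram.png)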

Friday, August 17, 2018

Running Gravitee API Gateway

[Screenshot: Gravitee running]

Configuring and Starting the Gravitee API Management Gateway

First install and start the dependencies:

1) Mongo
$ mongod

2) Elasticsearch
$ cd elasticsearch-6.3.2/bin/
$ ./elasticsearch




Starting the Gravitee API Management Gateway
Download and extract the graviteeio project zips.

First, start the API Gateway.
$ cd graviteeio-full-1.18.1/graviteeio-gateway-1.18.1/bin
$ ./graviteeio

Now accessing http://localhost:8082/ should give you the message:
No context-path matches the request URI.


Configure graviteeio-gateway-1.18.1/config/gravitee.yml.
By default, the Gravitee data is stored in the Mongo database named gravitee.
You can change these default configurations.
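For reference, a minimal sketch of the MongoDB settings in gravitee.yml could look like the following (the property names should be checked against the default file shipped in the config directory; the values here are the defaults we kept):

management:
  type: mongodb
  mongodb:
    dbname: gravitee
    host: localhost
    port: 27017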


Then, start the Management API.
$ cd graviteeio-full-1.18.1/graviteeio-management-api-1.18.1/bin
$ ./gravitee

Now you can access http://localhost:8083/management/apis/, which should return an empty response: [ ].

By now, the gravitee database in MongoDB should have the below collections:
events, ratelimit - for the API gateway
audits, metadata, roles, views - for the Management API

More configuration details, such as using an LDAP provider and configuring OAuth2 authentication, can be found in the official documentation.


Next, start the Portal.
$ cd graviteeio-full-1.18.1/graviteeio-management-ui-1.18.1

You have the choice of starting the Portal with Python3 or Node.

Let's go with Python3.
$ python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...

Now you can access the portal from http://localhost:8000/#!/


You may further configure the portal to fit your requirements.


Configuring and Starting Gravitee Access Management

First, start the gateway
$ cd gravitee-am-gateway-standalone-2.0.4/bin/
$ ./gravitee

Now you should get an empty response from http://localhost:8092/
Now, run the AM Management API.

$ cd gravitee-am-management-api-standalone-2.0.4/bin/
$ ./gravitee

Now, accessing http://localhost:8093/management/domains/ should give you an empty response.


Finally, let's start the last component of our ecosystem.
The AM Management UI, which is a client-side Angular application.

The directory of the Management UI is at:
$ cd gravitee-am-webui-2.0.4/

You will need an HTTP server such as Apache or Nginx. Then you may configure the Management UI following the documentation.
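As a rough sketch of what an Nginx server block for the Management UI could look like (the port and the root path below are assumptions; the important part is rewriting unknown paths to index.html for the Angular router):

server {
    listen 8094;
    root /opt/gravitee-am-webui-2.0.4;   # path to the extracted Management UI
    index index.html;

    location / {
        # route unknown paths back to index.html for the Angular router
        try_files $uri $uri/ /index.html;
    }
}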

Wednesday, August 1, 2018

The Atlanta Algorithm

As with many other cities in the US, Atlanta has a public transportation network (known as MARTA), which is a joke. In fact, the metro is OK; my issue is with the buses. Unfortunately, most of Atlanta is not walkable either, thanks to the lack of proper sidewalks in the residential neighborhoods. Life in Europe taught us that it is not necessary to drive, and that it is fine to use public transport. :D Given this state of affairs, we chose to live close to my university: 20 minutes on foot.


There is also a bus stop close to both my apartment and my lab. A bus (#6) passes by every 30 minutes, connecting my home to my lab at the university. I usually walk to my lab and back. But since I have a free monthly public transportation pass from the university, I decided to use it.


Before leaving home, I check the app to confirm the arrival time of the bus. However, the timing is not accurate. The bus usually arrives within an interval of [-3, 5], that is, 3 minutes early to 5 minutes late. But now, for the third time in a row, the bus came 10 minutes late. To ensure I do not miss the bus, I always arrive 3 to 4 minutes early. Therefore, I leave home 7 minutes before the bus's scheduled arrival at the stop (since I need to walk a bit to reach it).


Let's look at one specific example. Today, I left home at 7:43 to catch the bus that was due at 7:50. I reached the bus stop at 7:46. The bus eventually came at 8:10. It reached my destination in a few minutes, and I entered the lab at 8:18. If I had walked, I would have reached the lab at 8:03. The bus cost me an extra 15 minutes.


On a regular, successful day, it goes like this: I leave home at 7:43, the bus comes at 7:54, and I arrive at the lab at 8:01. If the bus reaches my stop earlier than usual, the driver waits at the stop before Emory Village for a few minutes to make sure the bus does not proceed to the next stop too early?!?! So it has never happened that I arrived before 8:01 when using the bus. I can conclude that using the bus costs me 18 minutes in the best case and 35 minutes in the worst case. On the other hand, walking takes me 20 minutes, a constant time, rain or shine.



[Update: Aug 2nd]
So I found that "MARTA On the Go" application actually offers a real-time schedule of the bus, unlike Google maps which shows a static schedule. Today I checked the bus time just before leaving home. The bus was on its way. It said "6 minutes late" with the current location. MARTA On the Go does not show the estimated arrival time at each stop with this dynamic information. Rather, it shows the current location of the bus (that means, you should have an idea of the road map, or you need to use it in conjunction with another application such as Google maps) and the current delay. With the 6 minutes delay, the bus can reach my stop in 4 (if the driver manages to catch up) - 8 (if the bus further gets delayed) minutes late, I estimated. With this estimate, I left home at 7:48 and arrived at the stop at 7:50. The bus eventually arrived at 8:02. So it actually got delayed 12 minutes (apparently it got delayed 6 more minutes since I checked). I reached the destination bus stop at 8:12 (13 minutes later than the original time). I arrived at the lab at 8:13. 25 minutes (i.e., 5 minutes later than my walking time). I noticed from the app that the next bus got delayed 8 minutes to reach my starting bus stop. While this app does not solve the inefficiency of the public transport, it certainly mitigates the uncertainty and help plan the timing better.

Every city's public transportation system teaches me some new life lesson. I learned something in Jeju. Now in Atlanta. I like these life lessons - I do not underestimate them. Trust yourself, instead of trusting an external element!


[Update: Aug 24th] MARTA introduced new bus routes and changed the schedules on the 18th. After that, the "MARTA On the Go" app stopped working properly: the number 6 and 36 buses do not show up on the app anymore. On top of that, the buses now come late (as late as 50 minutes) or early (as early as 3 minutes :P) in an arbitrary manner, not adhering to any schedule. I decided to stop using the bus and walk to and from work until they fix the app.