Your journey into facial blood-flow analysis starts here!
This chapter walks through dfxdemo, a simple Python-based demo that shows how to use the DeepAffex Extraction Library and the DeepAffex Cloud API.
dfxdemo can extract facial blood-flow from a video file or webcam, send it to the DeepAffex Cloud for processing, and display the results. It can also retrieve and display historical results. In this chapter, we will focus on video files.
To begin, please clone the demo from its GitHub repo and follow the instructions in the README.
dfxdemo has top-level commands that roughly correspond to the way the DeepAffex Cloud API is organized. It uses dfx-apiv2-client-py to communicate with the Cloud.
Authentication on the DeepAffex Cloud API uses JSON Web Tokens. Each token contains information pertaining to the current access request. On the Cloud, a token is mapped against an internal policy manager that specifies what access levels it has; these token policies control access to the various parts of the API. There are three types of tokens; the two that dfxdemo uses are a device token, obtained by registering your organization's DeepAffex license, and a user token, obtained by logging in as a user:

```shell
dfxdemo org register <your_license_key>
dfxdemo user login <email> <password>
```
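Under the hood, these commands call dfx-apiv2-client-py's async API. The sketch below illustrates the flow; the `dfx_apiv2_client` names it uses (`Organizations.register_license`, `Users.login`, the `Settings` token attributes) and the `(status, body)` return convention are assumptions that may vary between versions of the client library.

```python
# Minimal sketch of token acquisition via dfx-apiv2-client-py.
# The call names, argument lists, and (status, body) return convention here
# are assumptions and may differ across versions of the client library.
import asyncio

import aiohttp
import dfx_apiv2_client as dfxapi

async def authenticate(license_key: str, email: str, password: str) -> None:
    async with aiohttp.ClientSession() as session:
        # Registering the organization's license returns a device token
        status, body = await dfxapi.Organizations.register_license(
            session, license_key, "LINUX", "dfxdemo", "0.1")
        dfxapi.Settings.device_token = body["Token"]

        # Logging in as a user returns a user token
        status, body = await dfxapi.Users.login(session, email, password)
        dfxapi.Settings.user_token = body["Token"]

asyncio.run(authenticate("<your_license_key>", "<email>", "<password>"))
```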
The flowchart below shows the steps needed to obtain and renew a token.
By default, dfxdemo stores tokens in a file called config.json. In a production application, you will need to manage all tokens securely.
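For illustration, persisting tokens can be as simple as reading and writing a small JSON file, as sketched below. This is not dfxdemo's exact code, and the key names (`device_token`, `user_token`) are our assumptions; a production application should use a secure secret store instead.

```python
# Illustrative token persistence (not dfxdemo's exact code).
# The key names "device_token" and "user_token" are assumptions.
import json
from pathlib import Path

CONFIG_PATH = Path("config.json")

def load_config() -> dict:
    """Return the saved config, or a blank one if the file doesn't exist."""
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text())
    return {"device_token": "", "user_token": ""}

def save_config(config: dict) -> None:
    """Write the config to disk; a real app should protect this file."""
    CONFIG_PATH.write_text(json.dumps(config, indent=2))
```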
Note: All the commands below use the tokens obtained above, as illustrated in the demo code.
The DeepAffex Cloud is organized around the concepts of Points and Studies. A Study is a collection of Points that are computed together, and every measurement is made against a selected study. You can list the studies available to you, retrieve a study's details, and select the study to use for measurements:

```shell
dfxdemo studies list
dfxdemo study get <study_id>
dfxdemo study select <study_id>
```
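The same operations are available programmatically. The sketch below assumes a `Studies.list` coroutine in `dfx_apiv2_client` and `ID`/`Name` response fields; treat all of these names as assumptions rather than a definitive API reference.

```python
# Sketch of listing studies programmatically; Studies.list and the response
# field names ("ID", "Name") are assumptions, not a definitive API reference.
import asyncio

import aiohttp
import dfx_apiv2_client as dfxapi

async def list_studies() -> None:
    headers = {"Authorization": f"Bearer {dfxapi.Settings.user_token}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        status, studies = await dfxapi.Studies.list(session)
        for study in studies:
            print(study["ID"], study["Name"])

asyncio.run(list_studies())
```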
The process of extracting facial blood-flow from a sequence of images and sending it to the DeepAffex Cloud for processing is called making a measurement. dfxdemo uses OpenCV to read individual frames from a video, MediaPipe Face Mesh to track facial landmarks in each frame, and libdfxpython (the DFX Extraction Library's Python bindings) to extract facial blood-flow. To make a measurement from a video using the selected study:

```shell
dfxdemo measure make /path/to/video_file
```
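To make the pipeline concrete, here is a stripped-down sketch of the per-frame loop using only OpenCV and MediaPipe Face Mesh. The hand-off to the DFX Extraction Library is indicated by a comment rather than real calls, since its exact Python API is outside the scope of this sketch.

```python
# Stripped-down sketch of dfxdemo's per-frame pipeline: decode frames with
# OpenCV and track facial landmarks with MediaPipe Face Mesh. The hand-off to
# the DFX Extraction Library is indicated by a comment only.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture("/path/to/video_file")
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break  # end of video

    # MediaPipe expects RGB images; OpenCV decodes to BGR
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue  # no face found in this frame

    landmarks = results.multi_face_landmarks[0].landmark
    # Here, dfxdemo converts the landmarks into the point format the DFX
    # Extraction Library expects, adds the frame to the library's collector,
    # and periodically pulls out a chunk of blood-flow data for the Cloud.

cap.release()
face_mesh.close()
```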
The facial blood-flow data from a video is sent to the DeepAffex Cloud in fixed-duration chunks (5 seconds by default) over a WebSocket. As the measurement progresses, accumulated results are returned over the same WebSocket and displayed. When the DeepAffex Cloud receives the last chunk, it computes and returns the overall results.
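Conceptually, the transport follows the pattern sketched below. This is not the real DeepAffex wire protocol (the actual endpoint, message framing, and authentication are handled inside dfx-apiv2-client-py); it only illustrates sending chunks and receiving interim results on the same socket. The URL is hypothetical.

```python
# Conceptual sketch only: the real chunk protocol is implemented inside
# dfx-apiv2-client-py. The URL and message layout below are hypothetical.
import asyncio

import websockets  # third-party package

async def send_chunks(chunks: list[bytes]) -> None:
    async with websockets.connect("wss://example.com/measurement") as ws:
        for chunk in chunks:          # each chunk holds ~5 s of blood-flow data
            await ws.send(chunk)      # send the chunk payload...
            result = await ws.recv()  # ...and read back accumulated results
            print("interim result:", result)

# Usage (with real payloads): asyncio.run(send_chunks(list_of_chunk_payloads))
```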
The flowchart below shows the steps needed to make a measurement from a video.
You can also make a measurement using a webcam. This enables the measurement constraints, which guide the user to position themselves correctly.
If your legal agreement with NuraLogix allows saving results, dfxdemo can also retrieve a user's historical measurement results and their details:

```shell
dfxdemo measurements list
dfxdemo measure get <measurement_id>
```
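Programmatically, retrieving a past measurement might look like the sketch below; the `Measurements.retrieve` name and its return convention are assumptions based on the general shape of the client library, not a definitive reference.

```python
# Sketch of retrieving a past measurement's details; Measurements.retrieve and
# the (status, body) return convention are assumptions, not a definitive API.
import asyncio

import aiohttp
import dfx_apiv2_client as dfxapi

async def get_measurement(measurement_id: str) -> None:
    headers = {"Authorization": f"Bearer {dfxapi.Settings.user_token}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        status, details = await dfxapi.Measurements.retrieve(session, measurement_id)
        print(details)

asyncio.run(get_measurement("<measurement_id>"))
```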
These results are also available on the DeepAffex Dashboard which includes a sophisticated graphical display.
In the next chapter, we will discuss the DeepAffex Cloud API in more detail.