STM32 video player over network


This example presents a simple video streaming server (PC) and video player (STM32F746 board). The streaming solution is designed to be flexible, with a simple threading architecture that allows customization. Due to the image processing limitations of the STM32F746 (missing JPEG decoder), which acts as the client, the server (Linux PC) is responsible for decoding the selected media files and transmitting them in a simple, yet effective way. The streaming supports 24-bit RAW bitmaps (images) at 24-28 frames per second.

Please use the following link to get part of the source code and the binaries: Video Streaming

Video Streaming Server

The algorithm of the streaming server keeps the procedure simple. The main idea is to use OpenCV to extract the video frames, split each frame into smaller packets, and transmit them over UDP.

Each video frame is a RAW bitmap, which unfortunately increases the processing power required on the server but dramatically decreases the processing requirements of the STM32. The following scheme presents an abstraction of the overall process. Each transition (presented below) is a separate thread handled by the main process.
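The frame-extraction step could look roughly like the sketch below (C++ with the OpenCV API). The function name, buffer type and mutex are illustrative assumptions, not the project's actual code:

```cpp
// Sketch of a frame-extraction worker: decode frames and append them to a shared buffer.
// The deque/mutex pair is only illustrative; the actual project relies on global
// volatile variables for synchronization, as described further down.
#include <opencv2/opencv.hpp>
#include <deque>
#include <mutex>

std::deque<cv::Mat> g_frames;    // global frames buffer (its capacity is discussed below)
std::mutex g_frames_mutex;

void video_to_frames(cv::VideoCapture& capture, int frames_to_decode)
{
    cv::Mat frame;
    for (int i = 0; i < frames_to_decode && capture.read(frame); ++i) {
        // capture.read() already yields an 8-bit, 3-channel (BGR) image,
        // i.e. the 24-bit RAW bitmap format that is sent to the board.
        std::lock_guard<std::mutex> lock(g_frames_mutex);
        g_frames.push_back(frame.clone());
    }
}
```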

As mentioned before, the application is separated into the following thread classes:

  • VideoToFrames (max 4)
  • FramesToNetwork (max 1)

The application can spawn (depending on the current state) at most 4 VideoToFrames threads, which are responsible for decomposing the video into sequential frames and appending them to the global frames buffer, a list of at most 150 elements. For example, if the frames buffer holds more than 130 frames there will be only 1 VideoToFrames thread, if it holds between 100 and 130 frames there will be 2 VideoToFrames threads, and so on.
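A minimal sketch of that scaling rule is shown below. Only the 130 and 100 thresholds come from the description above; the lower threshold is an assumption used to complete the pattern:

```cpp
#include <cstddef>

// Decide how many VideoToFrames threads should run for the current buffer level.
int desired_video_threads(std::size_t buffered_frames)
{
    if (buffered_frames > 130) return 1;   // buffer nearly full: decode slowly
    if (buffered_frames > 100) return 2;
    if (buffered_frames > 70)  return 3;   // assumed threshold
    return 4;                              // buffer draining: use the maximum
}
```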

The synchronization between the active threads is done through a few global volatile variables which determine when each VideoToFrames thread should append the parsed frame to the buffer. As for the FramesToNetwork thread, there is no need for any synchronization mechanism because its execution is considered independent: the thread splits and transmits the frame patches as long as there are elements available in the frames buffer.
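A sketch of the FramesToNetwork side is given below. The packet header layout, the patch size of one image line, and the BSD socket calls are assumptions made for illustration; the project's actual packet format may differ:

```cpp
// Sketch of the FramesToNetwork loop body: split one frame into line patches,
// prepend a small header to each patch and send it as a UDP datagram.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>
#include <opencv2/opencv.hpp>

#pragma pack(push, 1)
struct PatchHeader {           // assumed layout, prepended to every payload
    uint32_t frame_number;     // frame this patch belongs to
    uint16_t line_offset;      // first image line contained in the patch
    uint16_t line_count;       // number of image lines in the patch
};
#pragma pack(pop)

void send_frame(int sock, const sockaddr_in& dest, const cv::Mat& frame, uint32_t frame_no)
{
    // Assumed patch size: one line per datagram keeps a 480-pixel-wide frame
    // (1440 bytes + header) under a typical Ethernet MTU.
    const int lines_per_patch = 1;
    const std::size_t line_bytes = frame.cols * 3;   // one line of a 24-bit RAW bitmap
    std::vector<uint8_t> packet;

    for (int y = 0; y < frame.rows; y += lines_per_patch) {
        const uint16_t lines =
            static_cast<uint16_t>(std::min(lines_per_patch, frame.rows - y));
        const PatchHeader header{frame_no, static_cast<uint16_t>(y), lines};

        packet.resize(sizeof(header) + lines * line_bytes);
        std::memcpy(packet.data(), &header, sizeof(header));
        // Assumes a continuous cv::Mat, which VideoCapture normally returns.
        std::memcpy(packet.data() + sizeof(header), frame.ptr(y), lines * line_bytes);

        sendto(sock, packet.data(), packet.size(), 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
    }
}
```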

A basic constraint is network activity, which may cause unexpected delays of the UDP packets. In some cases packets will be dropped due to router limitations, or blocked by a firewall (the stream may be detected as a DDoS attack). For that reason, it is recommended to use a point-to-point physical connection between the Linux PC and the STM32 board.

 

STM32F7 Video Player

After the transmission over UDP, the client is responsible for reconstructing the image. Each packet is separated into two parts:

  • Header
  • Payload (patch)

The header contains information about the position of the patch within the image as well as the frame number. Using the STM32's memory-to-peripheral DMA, the client copies the payload to the appropriate position in the screen's memory.
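On the client side, the header can be parsed and the destination inside the frame buffer computed as in the sketch below. The header layout mirrors the server-side sketch above; the resolution, pixel format and the plain memcpy are assumptions, and on the board the copy is performed by the DMA controller:

```cpp
// Sketch of placing one received patch into the display frame buffer.
#include <cstddef>
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct PatchHeader {            // assumed layout, matching the server-side sketch
    uint32_t frame_number;
    uint16_t line_offset;
    uint16_t line_count;
};
#pragma pack(pop)

constexpr int SCREEN_WIDTH    = 480;   // assumed LCD width of the STM32F746 discovery board
constexpr int BYTES_PER_PIXEL = 3;     // 24-bit RAW bitmap

void place_patch(uint8_t* framebuffer, const PatchHeader& header, const uint8_t* payload)
{
    const std::size_t offset =
        static_cast<std::size_t>(header.line_offset) * SCREEN_WIDTH * BYTES_PER_PIXEL;
    const std::size_t length =
        static_cast<std::size_t>(header.line_count)  * SCREEN_WIDTH * BYTES_PER_PIXEL;

    // On the STM32F7 this copy is handed to the DMA controller instead of the CPU,
    // which leaves the core free to keep servicing incoming UDP packets.
    std::memcpy(framebuffer + offset, payload, length);
}
```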

In detail, when the STM32F7 receives a UDP packet (which contains an image patch as well as a header), the image reconstruction algorithm performs the following steps.

  1. If the packet frame number == currently displayed frame number, the algorithm copies the payload of the packet to the appropriate position in the screen's memory, according to the position stored in the header.
  2. If the packet frame number < currently displayed frame number, the received packet is discarded.
  3. If the packet frame number > currently displayed frame number, the currently displayed frame number is updated and the algorithm continues with step 1.

The UDP protocol gives no assurance that packets will arrive, nor that they will be received in the order they were sent. To solve this issue, the information encapsulated in the header of each packet provides the image reconstruction algorithm with a mechanism to avoid displaying out-of-order packets in the wrong positions, and to discard late packets that could produce a video back-stepping effect.
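Putting the three cases together, the per-packet handling reduces to a few lines, as sketched below (reusing the PatchHeader and place_patch sketch from above; the variable names are illustrative):

```cpp
#include <cstdint>

static uint32_t g_current_frame = 0;   // frame number currently being assembled/displayed

void handle_packet(const PatchHeader& header, const uint8_t* payload, uint8_t* framebuffer)
{
    if (header.frame_number < g_current_frame)
        return;                                   // case 2: late packet, discard

    if (header.frame_number > g_current_frame)
        g_current_frame = header.frame_number;    // case 3: a newer frame has started

    place_patch(framebuffer, header, payload);    // case 1: copy the patch into position
}
```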

 

Testing Scenario

The testing was performed using a simple natural video (not an artificial 3D movie) in AVI format. The STM32F7 was directly connected to the PC, and the streaming server was the only active user application running on the system. The average frame rate of the streaming procedure was 21 frames per second.

