Design and Implementation of a QEMU-Based Video Surveillance System

Embedded system development is a complex engineering discipline that combines software and hardware, customization and productization. It draws on knowledge and skills from semiconductors, electronic information, signal processing, software engineering, and other fields, and is widely applied in consumer electronics, the defense industry, the Internet of Things, home security, government, education, and many other areas. As embedded operating systems and their supporting technologies have matured, products have become more diverse and iteration cycles have shortened.


When designing and developing an embedded product, it is common industry practice to choose a development board, customize the required CPU model, RAM, ROM, power module, display module, and other hardware according to the product requirements, and then select the operating system platform and the development tools used on it, such as compilers and linkers. Image files that run on the target platform (the development board) are built according to the manufacturer's hardware specifications and software instructions and are burned onto the target using tools such as JTAG. Once the software and hardware environment is in place, development of the embedded application can begin.


However, the early stages of embedded system development suffer from high hardware costs, complex and cumbersome environment setup, long software development cycles, difficulty in localizing faults, and a steep learning curve. Beginners in particular, when they encounter a hardware problem on a development board, find it hard to determine the cause or to obtain one-on-one guidance. Solving these problems has become an urgent topic of study for embedded software practitioners.


1. QEMU software development

QEMU is an open-source emulator originally developed by the French programmer Fabrice Bellard and is widely used for virtualization and hardware emulation. QEMU supports both user-mode emulation and full-system emulation. User-mode emulation means that QEMU can run binaries compiled for one platform on another platform, for example a binary built for the ARM instruction set: QEMU's TCG (Tiny Code Generator) engine translates the ARM instructions into TCG intermediate code and then into instructions for the target platform. Full-system emulation means that QEMU can emulate a complete virtual machine with its own virtual CPU, chipset, virtual memory, and various virtual peripherals, presenting to the guest operating system and applications a hardware view that is fully consistent with a physical computer. QEMU can emulate many platforms, including x86, ARM, and MIPS; for example, it can emulate an ARM-architecture development board on an x86 PC host and run an embedded kernel and applications on it.


QEMU supports instruction-level emulation of the ARM platform, so the target system can run in the emulated environment just as it would on real hardware. During embedded system development, building a virtual hardware environment allows embedded software to run without any physical hardware, giving developers a development and test platform and greatly improving efficiency. QEMU can emulate many architectures and boards, including the ARM Cortex-A9 based Versatile Express (vexpress) boards, the ARM64 Virt board, and RISC-V boards. In practice, a single personal computer running a Linux distribution is enough to build an ARM experimental platform on a QEMU virtual machine.


This article uses QEMU to emulate the quad-core Cortex-A9 Versatile Express development platform and ports the Linux 4.0 kernel and a root file system to it. An embedded video surveillance application is then designed and implemented on the resulting virtual platform, allowing users to access it remotely over the network and pull the audio and video stream. The case study verifies that the QEMU-based embedded software development approach is feasible and effective. The simplified workflow is as follows:


1) Set up the development environment on the host. Choose a Linux distribution such as Ubuntu, CentOS, or Fedora, download and install the GCC cross-compiler and other dependencies, and download and install QEMU;

2) Build the root file system. Download BusyBox, tailor its features, and create a basic root file system;

3) Download the Linux kernel source code (version 4.0 is used here) and compile it;

4) Write and cross-compile the application programs (a minimal example is sketched after this list);

5) Start QEMU with the kernel and root file system, and run the application inside QEMU.
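As a minimal illustration of step 4, and assuming the g++ counterpart of the gcc-arm-linux-gnueabi toolchain installed in step 1 is available, the following trivial program can be cross-compiled and run inside the QEMU guest to verify the toolchain and the emulated platform:

// hello.cpp - sanity check for the cross toolchain and the QEMU guest.
// Build on the host (static link so the ARM libc need not be in the rootfs):
//   arm-linux-gnueabi-g++ -static -o hello hello.cpp
// Copy the binary into the root file system and run it inside QEMU.
#include <cstdio>
#include <sys/utsname.h>

int main() {
    utsname info{};
    uname(&info);                                // query the guest kernel
    std::printf("hello from %s %s on %s\n",
                info.sysname, info.release, info.machine);
    return 0;
}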

Compared with traditional embedded software development, the QEMU-based approach has a simpler workflow, higher development efficiency, and lower cost.


2. Design of the video surveillance system

A video surveillance system is an electronic or network system that monitors a location through cameras and transmits the image and sound of that location to a central control system, so that abnormal conditions can be detected, recorded, and handled in a timely manner. It is widely used for security and on-site management in many industries, such as public security, fire protection, transportation, banking, healthcare, and factories. A typical video surveillance system consists of front-end cameras, transmission, control, display, and recording components. In short, it mainly covers real-time camera streaming, device control, and related operations. This article designs and implements a QEMU-based video surveillance system that exposes a custom private protocol and the RTSP protocol, providing camera control services and RTSP audio/video streaming services respectively.


2.1 Central control system

The central control system is implemented with the cross-platform graphical user interface framework Qt. After the program starts, it creates a thread that waits for the video surveillance system to connect. When the video surveillance system starts, it initializes the USB camera and starts the streaming service. Once the two sides have established a socket connection, the central control system obtains the RTSP URL of the video surveillance system according to the custom private protocol and calls the video playback component to pull the audio and video stream in real time, realizing the video monitoring function.
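A minimal sketch of this control channel follows, assuming the private protocol is reduced to the surveillance side sending its RTSP URL as a single text line; the listening port 9000 and the onRtspUrl handler are illustrative choices rather than the article's actual protocol:

// Central control side: wait for the surveillance system to connect and
// announce its RTSP URL, then hand the URL to the playback component.
#include <QCoreApplication>
#include <QTcpServer>
#include <QTcpSocket>
#include <QHostAddress>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Hypothetical handler: the real program would pass the URL to the Qt
    // video playback component to start pulling the stream.
    auto onRtspUrl = [](const QString &url) {
        qDebug() << "RTSP stream announced at" << url;
    };

    QTcpServer server;
    QObject::connect(&server, &QTcpServer::newConnection, [&]() {
        QTcpSocket *sock = server.nextPendingConnection();
        QObject::connect(sock, &QTcpSocket::readyRead, [sock, onRtspUrl]() {
            if (!sock->canReadLine())
                return;                                  // wait for a complete line
            const QString url =
                QString::fromUtf8(sock->readLine()).trimmed();
            onRtspUrl(url);                              // e.g. rtsp://<guest-ip>/camera
        });
    });
    server.listen(QHostAddress::Any, 9000);              // illustrative control port
    qDebug() << "waiting for the video surveillance system to connect...";

    return app.exec();
}

The article describes a dedicated waiting thread; the sketch instead relies on Qt's event loop, which serves the same purpose of accepting the incoming connection asynchronously.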


2.2 Video surveillance system


1) Kernel environment preparation

Choose the Ubuntu distribution, then download and install the GCC cross-compiler and other dependencies: sudo apt-get install libncurses5-dev gcc-arm-linux-gnueabi build-essential, and install QEMU: sudo apt install qemu-system-arm. Download BusyBox, tailor its features, and create a basic root file system. Download the Linux kernel source code and compile it.


2) Writing the application

RTSP is a network control protocol designed to establish and control streaming sessions with media servers in multimedia communication systems. Transport of the media data itself is not RTSP's job; in practice it is combined with RTP and RTCP to achieve actual stream transport and control. The division of work is as follows: RTSP establishes and controls sessions (default port 554) and is implemented over TCP; RTP carries the streaming media data; RTCP works with RTP to provide transmission statistics and flow control information. To provide the RTSP audio/video streaming service of the video surveillance system, image acquisition, image conversion, video encoding, and video distribution must be implemented.


FFmpeg provides powerful capabilities such as video capture, format conversion, encoding, and decoding; it can record and convert audio and video and produce data streams. The image acquisition module selects video4linux2 as the capture method, sets parameters such as resolution, pixel format, and frame rate, opens the camera (/dev/video0), and continuously reads image frames in a loop, which are then passed on for conversion and encoding. The image conversion module converts the source YUV422 images to the target YUV420 format and stores the result in an AVFrame for the encoding stage. After configuring resolution, frame rate, pixel format, and bit rate, the video encoding module feeds one YUV420 frame from the previous step to the encoder, reads back the H.264 encoded frame, and passes it to the RTSP service module for distribution.
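The following sketch strings the acquisition, conversion, and encoding modules together with the FFmpeg C API; the 640x480 resolution, 25 fps frame rate, 800 kbps bit rate, and the send_to_rtsp() hand-off are illustrative assumptions rather than the article's exact parameters, and error handling is kept minimal:

// Capture from /dev/video0 via video4linux2, convert YUV422 -> YUV420P,
// encode to H.264, and hand each encoded frame to the RTSP module.
extern "C" {
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdio>

static void send_to_rtsp(const AVPacket *pkt) {          // hypothetical hand-off
    std::printf("encoded H.264 frame, %d bytes\n", pkt->size);
}

int main() {
    const int W = 640, H = 480;
    avdevice_register_all();

    // 1. Image acquisition: open the camera through video4linux2.
    auto *v4l2 = av_find_input_format("video4linux2");
    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "video_size", "640x480", 0);
    av_dict_set(&opts, "framerate", "25", 0);
    av_dict_set(&opts, "input_format", "yuyv422", 0);     // packed YUV422 from the camera
    AVFormatContext *in = nullptr;
    if (avformat_open_input(&in, "/dev/video0", v4l2, &opts) < 0) return 1;

    // 2. Image conversion: YUV422 (camera) -> YUV420P (encoder input).
    SwsContext *sws = sws_getContext(W, H, AV_PIX_FMT_YUYV422,
                                     W, H, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    AVFrame *frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = W;
    frame->height = H;
    av_frame_get_buffer(frame, 0);

    // 3. Video encoding: H.264 (requires an encoder such as libx264).
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) return 1;
    AVCodecContext *enc = avcodec_alloc_context3(codec);
    enc->width = W;
    enc->height = H;
    enc->pix_fmt = AV_PIX_FMT_YUV420P;
    enc->time_base = AVRational{1, 25};
    enc->bit_rate = 800000;
    if (avcodec_open2(enc, codec, nullptr) < 0) return 1;

    AVPacket *raw = av_packet_alloc(), *out = av_packet_alloc();
    int64_t pts = 0;
    while (av_read_frame(in, raw) >= 0) {                 // one raw camera frame
        const uint8_t *src[1] = {raw->data};
        const int stride[1] = {2 * W};                    // 2 bytes per pixel in YUYV
        av_frame_make_writable(frame);
        sws_scale(sws, src, stride, 0, H, frame->data, frame->linesize);
        frame->pts = pts++;

        if (avcodec_send_frame(enc, frame) == 0) {
            while (avcodec_receive_packet(enc, out) == 0) {
                send_to_rtsp(out);                        // 4. hand to RTSP distribution
                av_packet_unref(out);
            }
        }
        av_packet_unref(raw);
    }
    return 0;
}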


Live555 is an open-source C++ library for multimedia streaming that uses open standard protocols (RTP/RTCP, RTSP, SIP). It can read, receive, and process MPEG, H.265, H.264, H.263+, DV, or JPEG video, as well as other audio and video encoding formats, and can also be used to build basic RTSP clients and servers.


The RTSP service is implemented on top of live555, including the OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, and TEARDOWN methods. In live555, each camera's RTSP session corresponds to one ServerMediaSession, and each elementary stream (video, audio, and so on) carried in that session corresponds to one ServerMediaSubSession. For the video stream, common encoding formats include H.264 and H.265; since the camera image is encoded as H.264, H264VideoFileServerMediaSubSession is used as the ServerMediaSubSession implementation. Finally, the successfully ported and compiled ffmpeg and live555 binaries are copied to the kmodules shared directory, and the video surveillance application is started from the QEMU guest console.
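A minimal sketch of this server structure with live555 follows; the stream name "camera" and the camera.264 input (assumed here to be a file or FIFO written by the encoding module) are illustrative:

// RTSP server sketch: one ServerMediaSession for the camera, with a single
// H.264 ServerMediaSubSession, as described above.
#include <BasicUsageEnvironment.hh>
#include <liveMedia.hh>

int main() {
    TaskScheduler *scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment *env = BasicUsageEnvironment::createNew(*scheduler);

    // RTSP server on the default RTSP port 554 (requires root; any free
    // port such as 8554 works as well).
    RTSPServer *rtspServer = RTSPServer::createNew(*env, 554);
    if (rtspServer == nullptr) {
        *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
        return 1;
    }

    // One ServerMediaSession per camera; the name becomes part of the URL,
    // e.g. rtsp://<guest-ip>:554/camera
    ServerMediaSession *sms = ServerMediaSession::createNew(
        *env, "camera", "camera", "H.264 stream from the USB camera");

    // One ServerMediaSubSession per elementary stream: here the H.264 video
    // produced by the FFmpeg encoding module, read from "camera.264".
    // reuseFirstSource = True shares one source among all connected clients.
    sms->addSubsession(H264VideoFileServerMediaSubSession::createNew(
        *env, "camera.264", True));

    rtspServer->addServerMediaSession(sms);
    char *url = rtspServer->rtspURL(sms);
    *env << "Stream ready at: " << url << "\n";
    delete[] url;

    env->taskScheduler().doEventLoop();                   // does not return
    return 0;
}

The central control system can then pull this URL, obtained over the private protocol, to start playback.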


3) Launching the application

Start a QEMU virtual machine with the USB webcam attached by passing -device usb-host,hostbus=1,hostaddr=3, where 1 and 3 are the bus and device numbers of the camera on the host. At the same time, specify the hardware platform, number of cores, memory size, kernel image path, shared directory path, and other parameters: sudo qemu-system-arm -M vexpress-a9 .... Finally, start the Qt program of the central control system and click the "Open" button; the real-time monitoring video from the camera is pulled successfully.

