This document introduces the features of Pose APIs.
Pose APIs detect people in a given image or video and analyze their pose by extracting each person's nose, eyes, ears, shoulders, elbows, wrists, pelvis, knees, and ankles as key points.
Here are the features of Pose APIs.
| Feature | Description |
|---|---|
| Analyzing image (Single Image Pose Estimation) | Detects people in an image and analyzes their pose by extracting the key points for each person. |
| Analyzing video (Job Submit) | Detects people in each frame of the requested video and extracts key points. |
| Checking video analysis result (Job Retrieval) | Returns the processing status and result of a job requested through the Analyzing video API. |
Video analysis requires using the two APIs together, asynchronously. First, request analysis with the Analyzing video API, which converts the video into individual frames and analyzes each person's pose in every frame. Then, retrieve the processing status and result with the Checking video analysis result API.
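The submit-then-poll flow above can be sketched in Python. This is a minimal illustration, not the official client: the endpoint URLs and the response fields (`job_id`, `status`) are hypothetical placeholders, since the actual paths, parameters, and payload shapes are defined in the Kakao developers reference. Only the `KakaoAK` authorization header format is taken as given.

```python
import json
import time
import urllib.request

# Hypothetical values: replace the key and endpoints with the real ones
# from the Kakao developers documentation.
API_KEY = "YOUR_REST_API_KEY"
SUBMIT_URL = "https://example.com/pose/job"           # Analyzing video (Job Submit)
RESULT_URL = "https://example.com/pose/job/{job_id}"  # Checking video analysis result (Job Retrieval)


def auth_headers(api_key: str) -> dict:
    """Kakao REST APIs authenticate with a 'KakaoAK <key>' header."""
    return {"Authorization": f"KakaoAK {api_key}"}


def submit_video(video_url: str) -> str:
    """Submit a video for pose analysis; the service responds with a job ID."""
    data = json.dumps({"video_url": video_url}).encode()
    req = urllib.request.Request(
        SUBMIT_URL,
        data=data,
        method="POST",
        headers={**auth_headers(API_KEY), "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]  # "job_id" is an assumed field name


def wait_for_result(job_id: str, interval: float = 5.0, timeout: float = 300.0) -> dict:
    """Poll the retrieval endpoint until the job finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            RESULT_URL.format(job_id=job_id), headers=auth_headers(API_KEY)
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if body.get("status") in ("success", "failed"):  # assumed status values
            return body
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout} s")
```

Polling with a fixed interval keeps the example simple; in production you would typically back off between retries and handle transient HTTP errors.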
Because Pose APIs are built on deep learning technology, you can bring state-of-the-art artificial intelligence into image- or video-based services. For example:
Kakao VX uses Pose APIs in its Smart Home Training service to extract a user's joint movements in real time and, through precise analysis, recommend the correct posture while the user exercises.
Image Source: Kakao VX Smart Home Training
The two video-related APIs, Analyzing video and Checking video analysis result, allow you to analyze up to the first 30 seconds of a video with a file size of 50 MB or less free of charge. To analyze a larger video or one with a higher frame rate, a partnership arrangement is required.
You can use Pose APIs only through a REST API.
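For the Analyzing image (Single Image Pose Estimation) feature, a REST call uploads the image and receives the detected key points. The sketch below is illustrative only: the endpoint URL, form field name (`file`), and response shape are assumptions to be replaced with the values in the Kakao developers reference.

```python
import json
import mimetypes
import urllib.request
import uuid

# Hypothetical endpoint and key: check the Kakao developers documentation
# for the actual Single Image Pose Estimation URL.
API_KEY = "YOUR_REST_API_KEY"
IMAGE_URL = "https://example.com/pose/image"


def build_multipart(field: str, filename: str, payload: bytes) -> tuple:
    """Encode a single file as a multipart/form-data body.

    Returns (body_bytes, content_type_header_value).
    """
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"


def estimate_pose(image_path: str) -> dict:
    """POST an image file and return the parsed JSON response
    (per-person key points, in whatever shape the service defines)."""
    with open(image_path, "rb") as fh:
        body, content_type = build_multipart("file", image_path, fh.read())
    req = urllib.request.Request(
        IMAGE_URL,
        data=body,
        method="POST",
        headers={
            "Authorization": f"KakaoAK {API_KEY}",  # Kakao REST auth header format
            "Content-Type": content_type,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The multipart encoding is written by hand here only to keep the example dependency-free; with the third-party `requests` library, `requests.post(url, headers=..., files={"file": fh})` achieves the same upload in one line.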