The code and video I used for this project are available on GitHub:
Let's explore a toy problem where I recorded a video of a ball thrown vertically into the air. The goal is to track the ball in the video and plot its position p(t), velocity v(t) and acceleration a(t) over time.
Let's define our reference frame to be the camera, and for simplicity we only track the vertical position of the ball in the image. We expect the position to be a parabola, the velocity to decrease linearly and the acceleration to be constant.
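For reference, these expectations follow directly from the constant-gravity model. Here is a minimal sketch of the ideal curves; the values of g and v0 are illustrative assumptions, not fitted to the video:

import numpy as np

g = 9.81   # m/s^2, gravitational acceleration (assumed constant)
v0 = 10.0  # m/s, illustrative initial upward velocity

t = np.linspace(0, 2 * v0 / g, 100)  # time until the ball returns

p = v0 * t - 0.5 * g * t**2  # position: a parabola
v = v0 - g * t               # velocity: decreases linearly
a = np.full_like(t, -g)      # acceleration: constant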
Ball Segmentation
As a first step we need to identify the ball in each frame of the video sequence. Since the camera stays static, a straightforward way to detect the ball is to use a background subtraction model, combined with a color model to remove the hand in the frame.
First, let's get our video clip displayed with a simple loop using VideoCapture from OpenCV. We simply restart the clip once it has reached its end. We also make sure to play the video back at the original frame rate by calculating sleep_time in milliseconds based on the FPS of the video. Finally, make sure to release the resources at the end and close the windows.
import cv2

cap = cv2.VideoCapture("ball.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS))

while True:
    ret, frame = cap.read()
    if not ret:
        # reached the end: restart from the first frame
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue

    cv2.imshow("Frame", frame)

    # play back at the original frame rate
    sleep_time = 1000 // fps
    key = cv2.waitKey(sleep_time) & 0xFF
    if key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
Let's first work on extracting a binary segmentation mask for the ball. This essentially means that we want to create a mask that is active for pixels belonging to the ball and inactive for all other pixels. To do that, I'll combine two masks: a motion mask and a color mask. The motion mask extracts the moving parts, and the color mask primarily removes the hand in the frame.
For the color filter, we can convert the image to the HSV color space and select a specific hue range (20–100) that contains the green tones of the ball but no skin tones. I don't filter on the saturation or brightness values, so we can use their full range (0–255).
# filter based on color
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask_color = cv2.inRange(hsv, (20, 0, 0), (100, 255, 255))
To create the motion mask we can use a simple background subtraction model. We use the first frame of the video as the background by setting the learning rate to 1. In the loop, we apply the background model to get the foreground mask, but don't integrate new frames into it by setting the learning rate to 0.
...
# initialize background model from the first frame
bg_sub = cv2.createBackgroundSubtractorMOG2(varThreshold=50, detectShadows=False)

ret, frame0 = cap.read()
if not ret:
    print("Error: cannot read video file")
    exit(1)

# learningRate=1.0: use this frame as the full background
bg_sub.apply(frame0, learningRate=1.0)

while True:
    ...
    # filter based on motion
    mask_fg = bg_sub.apply(frame, learningRate=0)
In the next step, we can combine the two masks and apply an opening morphology to get rid of small noise, and we end up with a perfect segmentation of the ball.
# combine both masks
mask = cv2.bitwise_and(mask_color, mask_fg)
mask = cv2.morphologyEx(
    mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13, 13))
)
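While tuning the hue range and the kernel size, it can help to display the intermediate masks next to the frame. This is an optional debugging step; the window names are arbitrary:

# optional: inspect the intermediate masks while tuning thresholds
cv2.imshow("Color Mask", mask_color)
cv2.imshow("Motion Mask", mask_fg)
cv2.imshow("Combined Mask", mask)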
Tracking the Ball
The only thing left in the mask is our ball. To track the center of the ball, I first extract its contour and then take the center of its bounding box as the reference point. In case some noise makes it through our mask, I filter the detected contours by size and only look at the largest one.
# find the largest contour, corresponding to the ball we want to track
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(contours) > 0:
    largest_contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest_contour)
    center = (x + w // 2, y + h // 2)
We can also add some annotations to our frame to visualize the detection. I'm going to draw two circles: one for the center and one for the perimeter of the ball.
cv2.circle(frame, center, 30, (255, 0, 0), 2)
cv2.circle(frame, center, 2, (255, 0, 0), 2)
To keep track of the ball's position, we can use a list. Every time we detect the ball, we simply append the center position to the list. We can also visualize the trajectory by drawing lines between consecutive points in the tracked position list.
tracked_pos = []

while True:
    ...
    if len(contours) > 0:
        ...
        tracked_pos.append(center)

    # draw trajectory
    for i in range(1, len(tracked_pos)):
        cv2.line(frame, tracked_pos[i - 1], tracked_pos[i], (255, 0, 0), 1)
Creating the Plot
Now that we can track the ball, let's start exploring how to plot the signal using matplotlib. As a first step, we create the final plot once the video has finished, and in a second step we worry about how to animate it in real time. To show the position, velocity and acceleration we can use three horizontally aligned subplots:
import matplotlib.pyplot as plt

fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(10, 2), dpi=100)

axs[0].set_title("Position")
axs[0].set_ylim(0, 700)
axs[1].set_title("Velocity")
axs[1].set_ylim(-200, 200)
axs[2].set_title("Acceleration")
axs[2].set_ylim(-30, 10)

for ax in axs:
    ax.set_xlim(0, 20)
    ax.grid(True)
We're only interested in the y position in the image (array index 1), and to get a zero-offset position plot, we can subtract the first position.
import numpy as np

pos0 = tracked_pos[0][1]
pos = np.array([pos0 - p[1] for p in tracked_pos])
For the velocity we can use the difference in position as an approximation, and for the acceleration we can use the difference of the velocity.
vel = np.diff(pos)
acc = np.diff(vel)
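Note that np.diff yields per-frame differences, so the units here are pixels per frame and pixels per frame squared. If you prefer per-second units, you could scale by the frame rate. This is an optional step, assuming a constant FPS:

# optional: convert to per-second units (pixels/s, pixels/s^2)
vel_px_per_s = vel * fps
acc_px_per_s2 = acc * fps**2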
And now we can plot these three values:
axs[0].plot(range(len(pos)), pos, c="b")
axs[1].plot(range(len(vel)), vel, c="b")
axs[2].plot(range(len(acc)), acc, c="b")

plt.show()
Animating the Plot
Now on to the fun part: we want to make this plot dynamic! Since we're running inside an OpenCV GUI loop, we cannot directly use matplotlib's show function, as it would block the loop and stop our program. Instead we need to make use of some trickery ✨
The main idea is to render the plot in memory into a buffer and then display this buffer in our OpenCV window. By manually calling the draw function of the canvas, we can force the figure to be rendered into a buffer. We can then grab this buffer and convert it to a NumPy array. Since the buffer is in RGBA format, but OpenCV uses BGR, we need to convert the color order.
fig.canvas.draw()

buf = fig.canvas.buffer_rgba()
plot = np.asarray(buf)
plot = cv2.cvtColor(plot, cv2.COLOR_RGBA2BGR)
Make sure that the axs.plot calls are now inside the frame loop:
while True:
    ...
    axs[0].plot(range(len(pos)), pos, c="b")
    axs[1].plot(range(len(vel)), vel, c="b")
    axs[2].plot(range(len(acc)), acc, c="b")
    ...
Now we can simply display the plot using OpenCV's imshow function.
cv2.imshow("Plot", plot)
And voilà, you get your animated plot! However, you'll notice that the performance is quite low: re-drawing the full plot every frame is expensive. To improve the performance, we need to make use of blitting. This is an advanced rendering technique that draws the static parts of the plot into a background image and only re-draws the changing foreground elements. To set this up, we first need to keep a reference to each of our three line plots, created before the frame loop.
pl_pos = axs[0].plot([], [], c="b")[0]
pl_vel = axs[1].plot([], [], c="b")[0]
pl_acc = axs[2].plot([], [], c="b")[0]
Then we need to draw the figure once before the loop and copy the background of each axis.
fig.canvas.draw()
bg_axs = [fig.canvas.copy_from_bbox(ax.bbox) for ax in axs]
In the loop, we can now update the data of each plot; then, for each subplot, we restore the region's background, draw the new line artist, and call blit to apply the changes.
# update plot data
pl_pos.set_data(range(len(pos)), pos)
pl_vel.set_data(range(len(vel)), vel)
pl_acc.set_data(range(len(acc)), acc)

# blit position
fig.canvas.restore_region(bg_axs[0])
axs[0].draw_artist(pl_pos)
fig.canvas.blit(axs[0].bbox)

# blit velocity
fig.canvas.restore_region(bg_axs[1])
axs[1].draw_artist(pl_vel)
fig.canvas.blit(axs[1].bbox)

# blit acceleration
fig.canvas.restore_region(bg_axs[2])
axs[2].draw_artist(pl_acc)
fig.canvas.blit(axs[2].bbox)
And there we go: the plotting is sped up and the performance has drastically improved.
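If you want to quantify the speedup yourself, you can wrap the plotting section in a simple timer. A minimal sketch using time.perf_counter; the exact numbers will depend on your backend and figure size:

import time

t0 = time.perf_counter()
# ... plotting / blitting code for one frame ...
plot_ms = (time.perf_counter() - t0) * 1000
print(f"plot time: {plot_ms:.1f} ms")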
In this post, you learned how to apply simple computer vision techniques to extract a moving foreground object and track its trajectory. We then created an animated plot using matplotlib and OpenCV. Everything was demonstrated on a toy example video of a ball thrown vertically into the air, but the tools and techniques used in this project are useful for all kinds of tasks and real-world applications! The full source code is available on my GitHub. I hope you learned something today. Happy coding, and take care!