Thanks to Joe Davies (@joewdavies) for providing the image.
sx = dim(tiles[[1]])[1]  # in practice, look at all tiles and do the gymnastics as necessary
sy = dim(tiles[[1]])[2]
combined = Image(NA_real_, dim = c(2*sx, 2*sy, 3), colormode = "color")
combined[1:sx,          1:sy,          ] = tiles[[1]]
combined[(sx+1):(2*sx), 1:sy,          ] = tiles[[2]]
combined[1:sx,          (sy+1):(2*sy), ] = tiles[[3]]
combined[(sx+1):(2*sx), (sy+1):(2*sy), ] = tiles[[4]]
display(combined)
Some heads are cut off due to misclassification of the bright-colored necks as sky \(\to\) small objects. Refinement: eliminate such small objects. Bird bodies without necks and heads are good enough for us, for now.
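One way to do this elimination, as a minimal sketch rather than the original code (the mask name birdmask and the 50-pixel area cutoff are illustrative assumptions): label the connected components, measure their areas, and remove the small ones with EBImage.

lab   = bwlabel(birdmask)                       # label connected foreground components
areas = computeFeatures.shape(lab)[, "s.area"]  # area of each labeled object, in pixels
small = which(areas < 50)                       # objects too small to be bird bodies (cutoff is a guess)
birdmask_refined = rmObjects(lab, small)        # drop the small objects from the label image
display(birdmask_refined > 0)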
Use EBImage::readImage to read into a 4D array: \(n_x\times n_y\times n_{\text{colors}}\times n_{\text{timepoints}}\).
library("EBImage")frames=dir("frames", full.names =TRUE)mov=readImage(frames[1:500])# only 1:500 to save time/space, good enough for demodim(mov)# [1] 1280 720 3 500
Apply optical flow (essentially, simple linear algebra and analysis) to detect and measure local velocities of the image content.
Try all possible translations of grus2 and find the one that leads to maximal overlap (correlation) with grus. imagefx::xcorr3d does this efficiently using FFT.
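A minimal sketch of this step (assuming grus and grus2 have been converted to grayscale matrices of identical dimensions; the code is illustrative, not the original):

xc = imagefx::xcorr3d(grus, grus2)
xc$max.shifts  # the translation (in pixels) that maximizes the cross-correlation
xc$max.corr    # the correlation value achieved at that shift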
Full code for the bird murmuration example
# I first tried to download the video file with
youtube-dl "https://www.youtube.com/watch?v=eakKfY5aHmY"
# but this resulted in the error message also reported here:
# https://stackoverflow.com/questions/75495800/error-unable-to-extract-uploader-id-youtube-discord-py
# So I followed the top-voted reply there, and ran
python3 -m pip install --force-reinstall https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz
yt-dlp "https://www.youtube.com/watch?v=eakKfY5aHmY"
# The video has 25 frames per second. Some of the interesting segments are:
# 0:18-0:31, 1:21-1:33, 1:34-1:57, 2:10-2:32, 3:34-3:46
# I used the following to extract the frames from the time period 1:35 - 1:57
# (-ss gives the start time, -t the duration, here 22 seconds).
ffmpeg -ss 00:01:35 -t 00:00:22 -i resources/eakKfY5aHmY.mp4 frames/murm-%04d.png
Read the frames (png files) produced by ffmpeg
frames=dir("frames", full.names =TRUE)frames=frames[1:500]mov=readImage(frames)print(object.size(mov), unit ="Gb")movg=mov[,,1,]+mov[,,2,]+mov[,,3,]colorMode(movg)="grayscale"
Optical flow analysis: manually divide the image into overlapping squares of side length 2*epsilon, centered at the grid points (cx, cy). Within each of them, for each time point, compute the flow vector and store it in the array fvec.
stride  = 30
epsilon = 40
time = 1:dim(mov)[4]
# Instead of the 3 nested loops and the fvec array, one could also use dplyr and a tidy tibble, depending on taste.
cx = seq(from = epsilon, to = dim(movg)[1] - epsilon, by = stride)
cy = seq(from = epsilon, to = dim(movg)[2] - epsilon, by = stride)
fvec = array(NA_real_, dim = c(4, length(cx), length(cy), length(time) - 1))
for (it in seq_len(length(time) - 1)) {
  im1 = movg[, , time[it]]
  im2 = movg[, , time[it] + 1]
  for (ix in seq_along(cx)) {
    sx = (cx[ix] - epsilon + 1):(cx[ix] + epsilon)
    for (iy in seq_along(cy)) {
      sy = (cy[iy] - epsilon + 1):(cy[iy] + epsilon)
      xc = imagefx::xcorr3d(im1[sx, sy], im2[sx, sy])
      # store the estimated shift (2 values), the maximum correlation,
      # and the correlation at the center of the correlation matrix
      fvec[, ix, iy, it] = with(xc, c(max.shifts, max.corr,
                                      corr.mat[nrow(corr.mat)/2 + 1, ncol(corr.mat)/2 + 1]))
    }
  }
}
save(fvec, file = "resources/fvec.RData")
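As a quick check of the result (a sketch that is not part of the original code), one can summarize fvec into a mean flow speed per frame transition, using its first two components, which hold the estimated x and y shifts:

# mean magnitude of the shift vectors across the grid, for each frame transition
speed = apply(fvec, 4, function(v) mean(sqrt(v[1, , ]^2 + v[2, , ]^2)))
plot(speed, type = "l", xlab = "frame", ylab = "mean |shift| (pixels per frame)")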
R interface to the Bio-Formats library by the Open Microscopy Environment (OME) collaboration, for reading and writing image data in many different formats, including proprietary (vendor-specific) microscopy image data and metadata files.
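For illustration, a minimal sketch, assuming this refers to the RBioFormats Bioconductor package (the file name is a placeholder; check the package documentation for the exact interface):

library("RBioFormats")
img  = read.image("resources/example.nd2")     # read the pixel data from a (hypothetical) vendor file
meta = read.metadata("resources/example.nd2")  # read the associated OME metadata without loading pixels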
Zarr and Rarr
The Zarr specification defines a format for chunked, compressed, N-dimensional arrays. Its design allows efficient access to subsets of the stored array, and it supports both local and cloud storage systems. Zarr is experiencing increasing adoption in a number of scientific fields where multi-dimensional data are prevalent.
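A minimal sketch of reading a subset with the Rarr package (the path "data/example.zarr", the assumption of a three-dimensional array, and the chosen index ranges are placeholders, not from the original):

library("Rarr")
zarr_overview("data/example.zarr")  # report the array's shape, chunking and data type
a = read_zarr_array("data/example.zarr",
                    index = list(1:100, 1:100, NULL))  # read only the requested slab; NULL = all of that dimension
dim(a)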