Channel: OpenCV Q&A Forum - Latest question feed

Unable to use VideoCapture to access Axis M1013 (Java)

Hello, I am trying to use VideoCapture in Java to access the stream from an Axis M1013. The code does not return any errors, but it never confirms that the camera stream has been opened either. I also tried grabbing a single JPEG from the stream rather than reading the video, but that does not seem to work either: `vc.isOpened()` never returns true. My code as of right now is:

```java
package camTest;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

public class CamTestMain {

    public static void main(String[] args) {
        VideoCapture vc;
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        vc = new VideoCapture();
        try {
            vc.open("http://axis-camera-223-catapult.lan/mjpg/video.mjpg");
            System.out.println("no err");
            Mat frame = new Mat();
            vc.read(frame);
            Highgui.imwrite("C:/dmp.jpg", frame);
        } catch (Exception e) {
            System.out.println("error opening cam. details: ");
            e.printStackTrace();
        }
        if (vc.isOpened())
            System.out.println("cam opened");
    }
}
```

I'm using OpenCV version 2.4.13 and JDK 8u101. I have verified through a browser that the resources exist, using the URL `http://<camera address>/mjpg/video.mjpg`. I have installed FFmpeg, but I don't know any way to verify that it works properly or is installed correctly. I can also access the camera configuration through a browser, and can get a JPEG image through `http://<camera address>/jpg/image.jpg`. Any help would be much appreciated, as I've been working at this for quite some time. Thanks, JT
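One way to narrow this down is to test the same MJPEG URL outside of Java first. Below is a minimal Python sketch (assuming the OpenCV Python bindings are installed, and reusing the camera URL from the code above) that checks whether OpenCV's FFmpeg backend can open the stream at all:

```python
import cv2

# Same camera URL as in the Java code above; substitute your Axis camera's address.
url = "http://axis-camera-223-catapult.lan/mjpg/video.mjpg"

cap = cv2.VideoCapture(url)
print("opened:", cap.isOpened())  # False usually means the backend never connected

ok, frame = cap.read()
if ok:
    cv2.imwrite("dmp.jpg", frame)  # grab one frame as a sanity check
    print("frame size:", frame.shape)
cap.release()
```

If this fails too, the problem is the FFmpeg backend or the URL rather than the Java wrapper; if it works, the Java side (native library path, the opencv_ffmpeg DLL) is the more likely culprit.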

VideoWriter recording MP4 + X264 (OpenCV 3.1)

I'm trying to record a video with the X264 codec in the MP4 container. The resulting recording works and can be played with a video player, however I receive the following warning:

```
OpenCV: FFMPEG: tag 0x34363258/'X264' is not supported with codec id 28 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x00000021/'!???'
```

I've been looking in the OpenCV code and it appears that a fallback mechanism is used (and another tag is substituted). Can someone tell me why this is happening, and how I can remove the warning? https://github.com/opencv/opencv/blob/master/modules/videoio/src/cap_ffmpeg_impl.hpp#L1918 Thanks! Cédric
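The warning comes from FFmpeg rejecting the 'X264' FourCC as a tag inside the MP4 container, which standardizes 'avc1' (and a few related tags) for H.264. A minimal sketch, assuming your OpenCV build links an FFmpeg with H.264 encoding enabled, that requests the container-native tag instead:

```python
import cv2

# 'avc1' is the MP4-native tag for H.264; 'X264' is an AVI-style FourCC
# that the mp4 muxer does not accept, hence the fallback warning.
fourcc = cv2.VideoWriter_fourcc(*"avc1")
writer = cv2.VideoWriter("out.mp4", fourcc, 30.0, (640, 480))
print("writer opened:", writer.isOpened())
```

If 'avc1' is not available in your FFmpeg build, the fallback tag OpenCV chooses (0x00000021) still produces a valid file, which is why the recording plays fine despite the warning.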

Decoding a h264 (High) stream with OpenCV's ffmpeg on Ubuntu

I am working with a video stream (no audio) from an IP camera on Ubuntu 14.04. I am also a beginner with Ubuntu and everything on it. Everything was going great with a camera that has these parameters (from FFmpeg):

```
Input #0, rtsp, from 'rtsp://*private*:8900/live.sdp':
  Metadata:
    title           : RTSP server
  Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc
```

But then I changed to a newer camera, which has these parameters:

```
Input #0, rtsp, from 'rtsp://*private*/media/video2':
  Metadata:
    title           : VCP IPC Realtime stream
  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
```

My C++ program uses OpenCV 3 to process the stream. By default OpenCV uses FFmpeg to decode and display the stream with VideoCapture:

```cpp
VideoCapture vc;
vc.open(input_stream);
while ((vc >> frame), !frame.empty()) {
    // do work
}
```

With the new camera stream I get errors like these (from FFmpeg):

```
[h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
[h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7c6980] error while decoding MB 0 44, bytestream (4933)
[h264 @ 0x7bc2c0] SEI type 25 truncated at 208
[h264 @ 0x7bfaa0] SEI type 25 truncated at 206
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 18
[h264 @ 0x7c6980] error while decoding MB 0 18, bytestream (14717)
```

The image is sometimes glitched, sometimes completely frozen. After a few seconds to a few minutes the stream freezes completely without an error. I tried appending `?tcp` to the input stream, but the video still froze without an error after ~10 seconds. However, on VLC it plays perfectly. I installed the newest version (3.2.2) of FFmpeg, built with `./configure --enable-gpl --enable-libx264`. Now, playing directly with ffplay (instead of launching from source code with OpenCV's VideoCapture), the stream plays better and doesn't freeze, but sometimes still displays warnings:

```
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320
[h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319
[rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet
[rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
[h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320
```

Changing the camera hardware is not an option. The camera can be set to encode to H.265 or MJPEG, but with MJPEG it can output only 5 fps, which is not enough. Decoding to a static video file is not an option either, because I need to display real-time results about the stream. Maybe I should switch to some other decoder and player? From my research I conclude that I have these options:

- Somehow get OpenCV to use the FFmpeg build from another directory, where it is compiled with libx264.
- Somehow get OpenCV to use libvlc instead of FFmpeg. One example of switching to VLC is [here](http://answers.opencv.org/question/65932/how-to-stream-h264-video-with-rtsp-in-opencv-partially-answered/), but I don't understand it well enough to say if that is what I need. Or maybe I should be [parsing](http://stackoverflow.com/questions/30345495/ffmpeg-h264-parsing) the stream in code? I don't rule out that this could be some basic problem due to a lack of dependencies, because, as I said, I'm a beginner with Ubuntu.
- Use VLC to preprocess the stream, as suggested [here](http://stackoverflow.com/questions/23529620/opencv-and-network-cameras-or-how-to-spy-on-the-neighbors). This is probably slow, which again is bad for real-time results.

Any suggestions and comments will be appreciated.
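Since the freeze shows up as `read()` quietly returning empty frames rather than raising an error, one pragmatic stopgap while the decoder issue is investigated is a watchdog loop that reopens the stream after repeated failures. A minimal sketch, written against the Python bindings for brevity (the same `VideoCapture` calls exist in C++):

```python
import time
import cv2

URL = "rtsp://user:pass@camera/media/video2"  # hypothetical stream URL

cap = cv2.VideoCapture(URL)
failures = 0
while True:
    ok, frame = cap.read()
    if not ok or frame is None:
        failures += 1
        if failures > 25:            # ~1 s of dead air at 25 fps: assume a freeze
            cap.release()
            time.sleep(1.0)          # give the camera a moment before reconnecting
            cap = cv2.VideoCapture(URL)
            failures = 0
        continue
    failures = 0
    # ... do work on frame ...
```

This does not fix the underlying `h264 (High)` decode errors, but it turns a silent permanent freeze into a short reconnection gap.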

How to fix an OpenCV H264 decoding error

I use OpenCV for face detection, but while the code is running the picture frames turn into a mess, see the picture. I think this error comes from FFmpeg or OpenCV itself (it may be a threading problem), because if I just open the camera and do not detect faces, the picture does not get messed up. ![image description](/upfiles/14080982276660546.jpg)

```c
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>

#ifdef _EiC
#define WIN32
#endif

static CvMemStorage* storage = 0;
static CvHaarClassifierCascade* cascade = 0;

void detect_and_draw( IplImage* image );

const char* cascade_name = "C:\\Users\\a\\Desktop\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml";
/* "haarcascade_profileface.xml"; */

int main( int argc, char** argv )
{
    CvCapture* capture = cvCreateFileCapture("rtsp://192.168.1.108:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif");
    IplImage *frame, *frame_copy = 0;
    int optlen = strlen("--cascade=");
    const char* input_name;

    if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 )
    {
        cascade_name = argv[1] + optlen;
        input_name = argc > 2 ? argv[2] : 0;
    }
    else
    {
        /* path of haarcascade_frontalface_alt2.xml after OpenCV is installed;
           you can also copy this file into your project folder and skip the path:
           cascade_name = "haarcascade_frontalface_alt2.xml";
           or cascade_name = "C:\\Program Files\\OpenCV\\data\\haarcascades\\haarcascade_frontalface_alt2.xml" */
        cascade_name = "C:\\Users\\a\\Desktop\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt2.xml";
        input_name = argc > 1 ? argv[1] : 0;
    }

    cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 );
    if( !cascade )
    {
        fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
        fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" );
        return -1;
    }
    storage = cvCreateMemStorage(0);

    //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') )
    //    capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' );
    //else
    //    capture = cvCaptureFromAVI( input_name );

    cvNamedWindow( "result", 1 );

    if( capture )
    {
        for(;;)
        {
            if( !cvGrabFrame( capture ))
                break;
            frame = cvRetrieveFrame( capture );
            if( !frame )
                break;
            if( !frame_copy )
                frame_copy = cvCreateImage( cvSize(frame->width, frame->height),
                                            IPL_DEPTH_8U, frame->nChannels );
            if( frame->origin == IPL_ORIGIN_TL )
                cvCopy( frame, frame_copy, 0 );
            else
                cvFlip( frame, frame_copy, 0 );

            detect_and_draw( frame_copy );

            if( cvWaitKey( 10 ) >= 0 )
                break;
        }
        cvReleaseImage( &frame_copy );
        cvReleaseCapture( &capture );
    }
    else
    {
        const char* filename = input_name ? input_name : (char*)"lena.jpg";
        IplImage* image = cvLoadImage( filename, 1 );

        if( image )
        {
            detect_and_draw( image );
            cvWaitKey(0);
            cvReleaseImage( &image );
        }
        else
        {
            /* assume it is a text file containing the
               list of the image filenames to be processed - one per line */
            FILE* f = fopen( filename, "rt" );
            if( f )
            {
                char buf[1000+1];
                while( fgets( buf, 1000, f ) )
                {
                    int len = (int)strlen(buf);
                    while( len > 0 && isspace(buf[len-1]) )
                        len--;
                    buf[len] = '\0';
                    image = cvLoadImage( buf, 1 );
                    if( image )
                    {
                        detect_and_draw( image );
                        cvWaitKey(0);
                        cvReleaseImage( &image );
                    }
                }
                fclose(f);
            }
        }
    }

    cvDestroyWindow("result");
    return 0;
}

void detect_and_draw( IplImage* img )
{
    static CvScalar colors[] =
    {
        {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}},
        {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}}
    };

    double scale = 1.3;
    IplImage* gray = cvCreateImage( cvSize(img->width, img->height), 8, 1 );
    IplImage* small_img = cvCreateImage( cvSize( cvRound(img->width/scale),
                                                 cvRound(img->height/scale)), 8, 1 );
    int i;

    cvCvtColor( img, gray, CV_BGR2GRAY );
    cvResize( gray, small_img, CV_INTER_LINEAR );
    cvEqualizeHist( small_img, small_img );
    cvClearMemStorage( storage );

    if( cascade )
    {
        double t = (double)cvGetTickCount();
        CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage,
                                            1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/,
                                            cvSize(30, 30) );
        t = (double)cvGetTickCount() - t;
        printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) );
        for( i = 0; i < (faces ? faces->total : 0); i++ )
        {
            CvRect* r = (CvRect*)cvGetSeqElem( faces, i );
            CvPoint center;
            int radius;
            center.x = cvRound((r->x + r->width*0.5)*scale);
            center.y = cvRound((r->y + r->height*0.5)*scale);
            radius = cvRound((r->width + r->height)*0.25*scale);
            cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );
        }
    }

    cvShowImage( "result", img );
    cvReleaseImage( &gray );
    cvReleaseImage( &small_img );
}
```
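One common cause of exactly this kind of smearing with RTSP sources is that the per-frame processing (here, `cvHaarDetectObjects`) takes longer than a frame interval, so the capture's internal buffer backs up and FFmpeg ends up decoding against dropped packets. A minimal sketch of the usual mitigation: grab frames in a dedicated thread and let the detector always work on the latest frame. This is written in Python for brevity; the structure carries over to C++ with `std::thread`.

```python
import threading
import time
import cv2

class LatestFrameGrabber:
    """Continuously drain the stream so the decoder's buffer never backs up."""

    def __init__(self, url):
        self.cap = cv2.VideoCapture(url)
        self.lock = threading.Lock()
        self.frame = None
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            ok, f = self.cap.read()      # read as fast as frames arrive
            if ok:
                with self.lock:
                    self.frame = f       # keep only the newest frame

    def latest(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

grabber = LatestFrameGrabber("rtsp://192.168.1.108:554/cam/realmonitor?channel=1&subtype=0")
while True:
    frame = grabber.latest()
    if frame is not None:
        pass  # run the face detector here; it always sees a fresh frame
    time.sleep(0.03)
```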

OpenCV 3 VideoCapture doesn't work

I have a problem with the OpenCV 3 Java wrapper. When I use VideoCapture, `camera.grab()` always returns false. I've seen several threads about this problem on the internet. I succeeded in running OpenCV 2.4, but not version 3. My environment:

- Windows 10 (64-bit)
- Java 8u51 (32-bit)
- Eclipse Mars (32-bit)

So, I tested these methods. Environment:

- Set the Windows path to `D:\Programs\opencv3x\build\x86\vc12\bin`
- Add opencv_ffmpeg to `D:\Programs\opencv3x\build\x86\vc12\bin` (in OpenCV 3, this lib is already there with the right name: `opencv_ffmpeg300.dll`)

Development environment, in the Eclipse project:

- add `opencv-300.jar`
- set the native lib path to `D:/Programs/opencv3x/build/java/x86`

With this configuration I can use OpenCV 3 without problems... but I can't decode video files! Does anyone have a solution for this? Thx.

OpenCV not detecting FFMPEG on Linux Mint

I had been using OpenCV on Windows for a year and recently transitioned to Linux. Life has not been easy, from loading an already existing `Jar` file to compiling my own. I have managed to compile the `Jar`, but I'm having issues playing a video: I get the error `Unable to stop the stream: Inappropriate ioctl for device`, which from what I understand is something to do with Video I/O. When I call `Core.getBuildInformation()` it gives me the following output:

```
Video I/O:
  DC1394 1.x:                  NO
  DC1394 2.x:                  NO
  FFMPEG:                      NO
    avcodec:                   NO
    avformat:                  NO
    avutil:                    NO
    swscale:                   NO
    avresample:                NO
  GStreamer:                   NO
  OpenNI:                      NO
  OpenNI PrimeSensor Modules:  NO
  OpenNI2:                     NO
  PvAPI:                       NO
  GigEVisionSDK:               NO
  Aravis SDK:                  NO
  UniCap:                      NO
  UniCap ucil:                 NO
  V4L/V4L2:                    NO/YES
  XIMEA:                       NO
  Xine:                        NO
  gPhoto2:                     NO
```

meaning `FFMPEG` was not compiled in. But when compiling the `Jar` I configure CMake with `cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..` and the output is:

```
Video I/O:
  DC1394 1.x:                  NO
  DC1394 2.x:                  YES (ver 2.2.4)
  FFMPEG:                      YES
    avcodec:                   YES (ver 56.60.100)
    avformat:                  YES (ver 56.40.101)
    avutil:                    YES (ver 54.31.100)
    swscale:                   YES (ver 3.1.101)
    avresample:                YES (ver 2.1.0)
  GStreamer:                   NO
  OpenNI:                      NO
  OpenNI PrimeSensor Modules:  NO
  OpenNI2:                     NO
  PvAPI:                       NO
  GigEVisionSDK:               NO
  Aravis SDK:                  NO
  UniCap:                      NO
  UniCap ucil:                 NO
  V4L/V4L2:                    NO/YES
  XIMEA:                       NO
  Xine:                        NO
  gPhoto2:                     NO
```

From this log, `FFMPEG` is installed and detected by OpenCV. I then compile the `jar` with `make -j10` and it runs successfully without a single error, but even after all this OpenCV does not play video files. It does work with the webcam, though... I am really lost and very new to OpenCV on Linux, so if anyone knows what the problem could be, please help me sort it out.

FFmpeg support on CentOS 7

I am trying to compile OpenCV with FFmpeg support on CentOS 7. I installed FFmpeg from the EPEL repository with `yum install ffmpeg ffmpeg-devel`. When I try to compile using `cmake -G"Unix Makefiles" -DWITH_FFMPEG=1 .` I get the following output (I am only copying the Video I/O section; please let me know if I should add more):

```
--   Video I/O:
--     DC1394 1.x:                  NO
--     DC1394 2.x:                  YES (ver 2.2.2)
--     FFMPEG:                      NO
--       avcodec:                   YES (ver 56.26.100)
--       avformat:                  YES (ver 56.25.101)
--       avutil:                    YES (ver 54.20.100)
--       swscale:                   YES (ver 3.1.101)
--       avresample:                YES (ver 2.1.0)
--     GStreamer:
--       base:                      YES (ver 0.10.36)
--       video:                     YES (ver 0.10.36)
--       app:                       YES (ver 0.10.36)
--       riff:                      YES (ver 0.10.36)
--       pbutils:                   YES (ver 0.10.36)
```

This is confusing, because it seems that CMake detects the individual libraries but not FFmpeg itself. The other clue I have is that CMake fails a test it does at the beginning, since it shows:

```
-- WARNING: Can't build ffmpeg test code
```

Looking a bit more, I found that this test is done in `./cmake/OpenCVFindLibsVideo.cmake`, where it tries to compile the example `./cmake/checks/ffmpeg_test.cpp`. In my case I found that the variable `FFMPEG_LIBRARY_DIRS` is empty. I don't know if this is useful, but it is the only clue I found. I am stuck at the moment and would appreciate any suggestion.

Error opening video files without an extension or in some other formats

I have a video file at `/data/out.mp4` and a copy of it at `/data/out`. I'm on macOS Sierra with Xcode 8. Opening the video with the `.mp4` extension works, but I get an error with no extension or with some other formats. I tried OpenCV 2.4 and the latest 3.2 from git, compiled against FFmpeg release/2.0 and also release/3.0.

**With OpenCV version 3.2:**

```
Python 2.7.10 (default, Jul 30 2016, 19:40:32)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.2.0-dev'
>>> cv2.__file__
'/Users/johndoe/work/myproject/venv/local/lib/python2.7/site-packages/cv2.so'
```

**If the video has an extension:**

```
>>> v = cv2.VideoCapture('/data/out.mp4')
>>> v.get(cv2.CAP_PROP_FRAME_COUNT)
61117.0
```

**If the video has no extension:**

```
>>> v = cv2.VideoCapture('/data/out')
VIDEOIO(cvCreateFileCapture_AVFoundation (filename)): raised unknown C++ exception!
>>> v.get(cv2.CAP_PROP_FRAME_COUNT)
0.0
```

My FFmpeg configure:

```
./configure --prefix=/usr \
    --bindir=/usr/local/bin/ \
    --shlibdir=/usr/lib64 \
    --datadir=/usr/share/ffmpeg \
    --incdir=/usr/include/ffmpeg \
    --libdir=/usr/lib64 \
    --mandir=/usr/share/man \
    --extra-cflags='-O2 -g' \
    --extra-version=rpmfusion \
    --enable-bzlib \
    --enable-nonfree \
    --enable-libopenjpeg \
    --enable-libx264 \
    --enable-avfilter \
    --enable-postproc \
    --enable-pthreads \
    --disable-static \
    --enable-shared \
    --enable-gpl \
    --enable-runtime-cpudetect \
    --arch=x86_64
```

My OpenCV CMake:

```
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/Users/johndoe/work/myproject/venv/local/ \
    -D INSTALL_C_EXAMPLES=OFF \
    -D PYTHON_PACKAGES_PATH=/Users/johndoe/work/myproject/venv/lib/python2.7/site-packages \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D PYTHON_EXECUTABLE=/Users/johndoe/work/myproject/venv/bin/python \
    -D WITH_CUDA=OFF ..
```

I tried various other formats, re-encoding my original video (e.g. `ffmpeg -i out.mp4 out.asf`):

```
>>> v = cv2.VideoCapture('/data/out.avi')
VIDEOIO(cvCreateFileCapture_AVFoundation (filename)): raised unknown C++ exception!
>>> v = cv2.VideoCapture('/data/out.flv')
VIDEOIO(cvCreateFileCapture_AVFoundation (filename)): raised unknown C++ exception!
>>> v = cv2.VideoCapture('/data/out.asf')
VIDEOIO(cvCreateFileCapture_AVFoundation (filename)): raised unknown C++ exception!
```
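The `cvCreateFileCapture_AVFoundation` message suggests the failure happens in the AVFoundation backend, which OpenCV tries on macOS, not in FFmpeg at all. Since OpenCV 3.2 lets you request a specific backend explicitly, a minimal sketch worth trying (assuming your build exposes the two-argument constructor and really does link FFmpeg):

```python
import cv2

# Ask for the FFmpeg backend directly instead of letting OpenCV
# pick AVFoundation based on (or in the absence of) the file extension.
v = cv2.VideoCapture('/data/out', cv2.CAP_FFMPEG)
print(v.isOpened(), v.get(cv2.CAP_PROP_FRAME_COUNT))
```

If that opens the extensionless file, the problem is backend selection order rather than your FFmpeg build.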

How to build opencv_ffmpeg.dll from patched FFmpeg source

I am trying to write a patch to the FFmpeg API, and I am able to build the FFmpeg DLLs for Windows successfully using the instructions on their website (the MinGW/FFmpeg installation guide). However, I'm not sure how to package these various DLLs (avfilter.dll, avformat.dll, avcodec.dll, etc.) into the single opencv_ffmpeg.dll that CMake looks for when building OpenCV. I will eventually want to build my custom FFmpeg libraries and OpenCV cross-platform for Linux and Mac OS X as well. The end game is to have Java-wrapped OpenCV working cross-platform with a modified FFmpeg API. How do I do this?

Video Playback via Microsoft Media Foundation (msmf)

Hi! Has anyone successfully played back video using the Media Foundation backend (CV_CAP_MSMF) of VideoCapture on Windows 10? My goal is to use only Media Foundation and remove the FFmpeg dependency for video decoding, e.g.:

```cpp
VideoCapture cap(fileName, CV_CAP_MSMF);
```

Rebuilding OpenCV 3.0, 3.1, and 3.2 from source (with MSMF enabled), I always get `cap.isOpened()` returning false. Digging into the cap_msmf.cpp source, at line 3864, none of my videos can match this particular MFVideoFormat_RGB24:

```cpp
MediaType MT = FormatReader::Read(pType.Get());
// We can capture only RGB video.
if( MT.MF_MT_SUBTYPE == MFVideoFormat_RGB24 )
```

It makes me wonder if there is a limitation in OpenCV's Media Foundation implementation. I have spent a couple of days investigating this issue, and I hope to get some confirmation and shared experience from this forum. Thanks a lot! -Jay

OpenCV conda installation (missing ffmpeg) - Windows

So I managed to install OpenCV 3.1 using conda and Python 3.5, and everything seems to work fine. However, when trying to import a video file via FFmpeg I get this:

```python
import numpy as np
import cv2

cap = cv2.VideoCapture(r'data\vtest.avi')  # raw string: otherwise '\v' is a vertical-tab escape
cap.read()  # (False, None)
```

So obviously something is wrong with FFmpeg (using still images or my laptop webcam works fine). I have tried a couple of things:

1. Installing the ffmpeg binaries in my environment (they work fine separately, but OpenCV cannot call them?).
2. Moving the DLLs from the compiled version on SourceForge into the bin folder (which is on my path): `opencv_ffmpeg310_64.dll`, `opencv_ffmpeg310.dll`

But neither of the two options worked. Any ideas?
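Before chasing DLLs, it is worth checking whether the conda package was built with FFmpeg at all; many conda builds of that era were not. A minimal sketch using an API that exists in all OpenCV 3.x Python builds:

```python
import cv2

# Print only the Video I/O lines of the build summary; if FFMPEG reads NO,
# the conda binary was compiled without that backend, and no amount of
# external ffmpeg binaries will help.
info = cv2.getBuildInformation()
for line in info.splitlines():
    if any(k in line for k in ("Video I/O", "FFMPEG", "avcodec", "avformat")):
        print(line)
```

Note that the `opencv_ffmpeg*.dll` side-loading trick only works for Windows builds that were compiled to look for that DLL; a build with FFmpeg disabled simply ignores it.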

Trying to load a .mp4 video fails

Hey guys, I've installed FFmpeg and OpenCV on my Linux server, like this:

```
git clone <FFmpeg repo URL>
cd FFmpeg
./configure --enable-shared
make
sudo make install
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
git clone <OpenCV repo URL>
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
```

I wrote a little Python script:

```python
import cv2

cap = cv2.VideoCapture('video.mp4')
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
```

But I'm not able to load that video, and `total_frames` is always 0. I did some research into why this doesn't work, found tons of answers, and tried many ways to install everything, but none of them worked for me. What did I do wrong? Am I missing some detail? I'm getting really frustrated with the installation.
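A quick way to see where this fails is to check whether the file opens at all before querying properties; `CAP_PROP_FRAME_COUNT` returns 0 both when the container lacks usable metadata and when the capture never opened. A minimal diagnostic sketch:

```python
import cv2

cap = cv2.VideoCapture('video.mp4')
print("opened:", cap.isOpened())   # False -> backend/codec problem, not a metadata problem
ok, frame = cap.read()
print("first read:", ok)           # True while count is 0 -> only the count metadata is missing
```

If `isOpened()` is False, the freshly built libraries are probably not the ones Python is loading; an older `cv2.so` earlier on the path is a common cause.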

opencv_ffmpeg320_64.dll error with mjpeg

Hi, I am trying to load an MJPEG camera stream. The code works perfectly in OpenCV 2.4, but when I try the same code with OpenCV 3.2 I get the error below:

```
[mpjpeg @ 00000094198e87e0] Expected boundary '--' not found, instead found a line of 3 bytes
```

Please can you help?

```cpp
const std::string videoStreamAddress = "CAM_URL";
cv::setBreakOnError(true);
cv::namedWindow("Example3", cv::WINDOW_AUTOSIZE);
cv::VideoCapture cap;
cap.open(videoStreamAddress);
if (!cap.isOpened()) {
    cout << "File is not opened!";
    exit(-1);
}
//cout << frame.total() << endl;
int i = 1;
Mat frame;
for (;;) {
    try {
        cout << "showing next frame " << i << endl;
        cap.read(frame);
        if (frame.empty()) {
            cout << "Empty" << endl;
            break;
        }
        else
            cout << "Frame: " << i << endl;
        i++;
        imshow("Example3", frame);
        cout << "Frame showed " << i << endl;
        char c = waitKey(30); // without this the image won't be shown
        if (c == 27)
            break;
        //if (waitKey(30) >= 27) break; // exception
    }
    catch (...) {
        //cout << e.msg << endl; // output exception message
    }
}
```

Regards, Wael

Specify ffmpeg encoder in VideoCapture

I'm using an Orange Pi with a built-in CSI webcam. On the command line with ffmpeg I can get 30+ frames a second, no problem, using the hardware-accelerated cedrus encoder. From OpenCV, using the V4L2 bindings is much slower: I get about 10 frames per second in the best case. And it's no better if I specify CAP_FFMPEG, because that isn't using the hardware-accelerated encoder. Is there a way to either (a) specify an encoder in OpenCV VideoCapture (the API suggests no), (b) pipe output from an ffmpeg process into OpenCV VideoCapture in a straightforward way, or (c) some third method I'm missing? Thanks, Philip
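For option (b): VideoCapture itself cannot read a pipe of raw frames, but you can bypass it entirely by having ffmpeg decode (with whatever hardware acceleration it supports) and write raw BGR frames to stdout, then wrapping the bytes in a numpy array. A minimal sketch, assuming a 640x480 device at /dev/video0 (adjust the size, device, and input options to your setup):

```python
import subprocess
import numpy as np

W, H = 640, 480
cmd = [
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",      # capture from the camera
    "-f", "rawvideo", "-pix_fmt", "bgr24",  # raw BGR24 frames, OpenCV's native layout
    "pipe:1",                               # write to stdout
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

frame_bytes = W * H * 3
while True:
    buf = proc.stdout.read(frame_bytes)
    if len(buf) < frame_bytes:
        break  # stream ended
    frame = np.frombuffer(buf, np.uint8).reshape(H, W, 3)
    # frame is now an ordinary OpenCV-compatible BGR image
```

Any hardware decode/encode flags (e.g. whatever your cedrus-enabled ffmpeg build exposes) go on the ffmpeg command line; OpenCV never needs to know about them.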

License of FFmpeg for Android

Dear all, I downloaded the "opencv-3.2.0-android-sdk.zip" file to develop a commercial Android application. However, when I use a tool to check the licenses of my application (including the "sdk" folder of "opencv-3.2.0-android-sdk.zip"), I get warning messages about FFmpeg being used, for example in:

- /sdk/java/javadoc/org/opencv/videoio/Videoio.html
- /sdk/java/src/org/opencv/videoio/Videoio.java

I know that OpenCV includes some FFmpeg source code for video processing. My questions are:

1. Are we allowed to use this FFmpeg part in a commercial Android application?
2. If not, how can I remove the FFmpeg parts under the "sdk" folder (source code, javadoc, ...), since my application does not use this functionality?

I appreciate your answers, thank you very much.

Video Stabilization Output Length Change

I'm performing video stabilization in OpenCV 3.2.0 using code from the [videostab sample](https://github.com/opencv/opencv/blob/master/samples/cpp/videostab.cpp), reduced down to simply perform one-pass stabilization using OnePassStabilizer with a GaussianMotionFilter. The code was also enhanced to copy the source codec to the output writer. However, after copying the audio track from the original video to the stabilized video with FFmpeg, I noticed that the audio wasn't correctly aligned. Further inspection revealed that the stabilized video (3:49.03) is slightly longer than the input video (3:47.05). The OpenCV code copies the source FPS to the output FPS; however, when comparing the frame rates in VLC, the original video appears as 30.303030 while the stabilized video appears as 30. Why is the stabilized video slightly longer than the original? Why is the stabilized video's frame rate rounded down?

FFmpeg information for the input video:

```
$ ffmpeg -i original.mp4
...
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'original.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 0
    compatible_brands: isom3gp4
    creation_time   : 2016-02-25T14:09:56.000000Z
  Duration: 00:03:47.05, start: 0.000000, bitrate: 4593 kb/s
    Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 4498 kb/s, SAR 1:1 DAR 16:9, 30.30 fps, 30 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2016-02-25T14:09:56.000000Z
      handler_name    : VideoHandle
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
    Metadata:
      creation_time   : 2016-02-25T14:09:56.000000Z
      handler_name    : SoundHandle
At least one output file must be specified
```

FFmpeg information for the stabilized video:

```
$ ffmpeg -i videostab.mp4
...
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'videostab.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.1.0
  Duration: 00:03:49.03, start: 0.000000, bitrate: 2449 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 2446 kb/s, 30 fps, 30 tbr, 30 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
At least one output file must be specified
```
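The duration difference is consistent with a plain rounding of the frame rate: the same number of frames played back at 30 fps instead of 30.303 fps takes about 1% longer. A quick check of that hypothesis, assuming the stabilizer writes one output frame per input frame:

```python
in_fps = 30.303030                     # rate VLC reports for the input
out_fps = 30.0                         # rate of the stabilized output
in_duration = 3 * 60 + 47.05           # 227.05 s

# Frame count is preserved, so duration scales by the fps ratio.
frames = in_duration * in_fps          # ~6880 frames
out_duration = frames / out_fps        # ~229.3 s, i.e. ~3:49.3
print(out_duration)
```

That lands within about a third of a second of the observed 3:49.03, so the most likely cause is that CAP_PROP_FPS (or the writer) truncated 30.303 to an integer somewhere in the copy from source to output.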

OpenCV 3.2.0 for Android: VideoWriter and 3rd-party libs (FFmpeg)

OpenCV 3.2.0 for Android only allows (from what I have tried and seen in some posts) VideoWriter to encode .avi files with the FourCC "MJPG". Any other options I have tried result in `isOpened()` returning false. I am confused, as I don't know whether the official 3.2.0 version for Android is built with 3rd-party libs such as FFmpeg; I see that there are options in the build process to include this and other 3rd-party libs. My question is: assuming FFmpeg is not included in the official release, if I manage to rebuild OpenCV 3.2 with FFmpeg, does that mean VideoWriter will internally be able to use it? Or will it just be an extension of the lib with its own API, not tied internally to OpenCV? Any hint will be welcome, since I am quite lost. I prefer to ask rather than try directly, because I am not agile with the build process and would like to be sure before spending time on trial and error.

FPS and TBR

I have a WebM video file whose fps is 1k and tbr is 29.97:

```
ffprobe video.webm
...
Stream #0:0: Video: vp9 (Profile 0), yuv420p(tv, progressive), 1280x720, SAR 1:1 DAR 16:9, 1k fps, 29.97 tbr, 1k tbn, 1k tbc (default)
```

OpenCV VideoCapture's property CAP_PROP_FPS is 1000, and CAP_PROP_FRAME_COUNT is 123857. The real frame count when reading the video is, however, 3712, so the real fps is the tbr:

```
1000 * (3712 / 123857) = 29.97
```

Is the video file corrupted, or can I do something in OpenCV to get the real fps (and frame count) without reading the whole video?
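For reference, the only backend-independent way to get the true frame count in OpenCV is the exhaustive read, which also yields the effective fps when combined with the container duration. A minimal sketch (slow on long files, but exact):

```python
import cv2

cap = cv2.VideoCapture("video.webm")
reported_fps = cap.get(cv2.CAP_PROP_FPS)            # 1000 here, from the container timebase
reported_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)  # 123857, apparently timebase ticks rather than frames

frames = 0
while cap.read()[0]:
    frames += 1                                     # count frames actually decodable

duration = reported_count / reported_fps            # duration in seconds is still trustworthy
print("real frame count:", frames)
print("real fps:", frames / duration)               # ~29.97
cap.release()
```

The duration trick works because the reported fps and frame count are both off by the same timebase factor, so their ratio (the duration) survives.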

Streaming a webcam at 60 fps

Hi everybody, I have a problem capturing a webcam (Logitech C922) video stream. The webcam provides a stream at 60 fps at a resolution of 720p. Right now I initialize a capture, convert each frame from BGR to grayscale, and send the frame to an ImageView object. Here is my code:

```java
public void startCapture(ImageView realView, int frameWidth, int frameHeight) {
    moveCoordSystemToCenter(frameWidth / 2, frameHeight / 2);
    if (!this.capture.isOpened()) {
        capture.open(0);
        int fourcc = VideoWriter.fourcc('M', 'J', 'P', 'G');
        capture.set(Videoio.CAP_PROP_FOURCC, fourcc);
        capture.set(Videoio.CAP_PROP_FRAME_WIDTH, 1280);
        capture.set(Videoio.CAP_PROP_FRAME_HEIGHT, 720);
        System.out.println(capture.get(Videoio.CAP_PROP_FRAME_WIDTH));

        TimerTask frameGrabber = new TimerTask() {
            public void run() {
                long time = System.currentTimeMillis();
                Image tmp = grabFrame();
                Platform.runLater(new Runnable() {
                    int Counter = 1;
                    long res = 1;

                    public void run() {
                        realView.setImage(tmp);
                        res = res + System.currentTimeMillis() - time;
                        System.out.println("Fps = " + 1000 / (res / Counter));
                        Counter++;
                    }
                });
            }
        };
        this.timer = new Timer();
        timer.scheduleAtFixedRate(frameGrabber, 0, 1); // has no effect
    } else {
        if (this.timer != null) {
            this.timer.cancel();
            this.timer = null;
        }
        this.capture.release();
    }
}

private Image grabFrame() {
    Image imageToShow = null;
    Mat frame = new Mat();
    if (this.capture.isOpened()) {
        try {
            this.capture.read(frame);
            if (!frame.empty()) {
                imageToShow = mat2Image(frame);
            }
        } catch (Exception e) {
        }
    }
    return imageToShow;
}

private Image mat2Image(Mat frame) {
    MatOfByte buffer = new MatOfByte();
    Mat frameGray = new Mat();
    Imgproc.cvtColor(frame, frameGray, Imgproc.COLOR_BGR2GRAY);
    Imgcodecs.imencode(".png", frameGray, buffer);
    return new Image(new ByteArrayInputStream(buffer.toArray()));
}
```

I use OpenCV 3.0 Java, Eclipse Neon, and JavaFX. My problem is that I can't get more than 30 fps out of this thing, and I need 60 fps for my application. Setting the codec to MJPG brings the speed up from 15 fps to 30 fps, and MJPG is the only codec that holds my resolution; all others fall back to 640x480. So I played with some video and picture codec formats: JPG and PNG are similarly stable at 30 fps; BMP is a little faster but not as stable, at 28-36 fps. I already tried to set the fps via `capture.set(CAP_PROP_FPS, 60)`, which has no effect for values above 30. I read some stuff about ffmpeg, but I don't understand how the pipeline should work. Can anyone help? Greetz
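One way to separate the camera/driver side from the JavaFX side is to measure the raw grab rate with nothing downstream. The same property calls exist in the Python binding, which makes for a quick test harness; a minimal sketch, assuming the camera is device 0:

```python
import time
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 60)

n, t0 = 0, time.time()
while n < 300:
    ok, frame = cap.read()   # no color conversion, no PNG encode, no UI work
    if ok:
        n += 1
print("raw capture fps:", n / (time.time() - t0))
```

If this reports close to 60, the capture path is fine and the bottleneck is likely the per-frame PNG encode in `mat2Image` (PNG compression of a 720p frame can easily cost tens of milliseconds); writing the Mat's pixels into a JavaFX WritableImage directly would be a cheaper path.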

How to create a Python AWS Lambda zip with OpenCV + FFmpeg?

On my desktop I have FFmpeg and OpenCV compiled correctly, and everything works well. The function **loads a video** from S3 and runs a model that **extracts some frames** and stores them in another **S3 bucket**. But I have to deploy a conda env with my **lambda function** and its packages, and that build of **OpenCV** comes without **FFmpeg**, which I need for OpenCV to open the mp4 video file. The deployment zip is laid out like this:

```
python-project/
  - cv2/
  - numpy/
  - lambda_handler.py
```

How can I set up OpenCV so that videos are enabled? I tried to compile it on another desktop, but no luck. I have the same question on [Stack Overflow](https://stackoverflow.com/questions/44318020/how-to-create-a-python-aws-lambda-zip-with-opencv-ffmpeg).
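For context, the handler itself is straightforward once the bundled cv2 can decode mp4; the hard part is the packaging (an OpenCV build with FFmpeg linked in, compiled on an Amazon Linux-compatible system). A minimal sketch of the handler described above, with the bucket and key names being hypothetical placeholders:

```python
import boto3
import cv2

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Hypothetical buckets/keys -- adjust to your event structure.
    src_bucket, key = "input-videos", event["video_key"]
    dst_bucket = "extracted-frames"

    local = "/tmp/video.mp4"             # /tmp is the only writable path in Lambda
    s3.download_file(src_bucket, key, local)

    cap = cv2.VideoCapture(local)
    if not cap.isOpened():               # fails here if the bundled cv2 lacks FFmpeg
        raise RuntimeError("cv2 could not open the video; build lacks FFmpeg?")

    idx = uploaded = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 30 == 0:                # e.g. one frame per second at 30 fps
            path = "/tmp/frame_%06d.jpg" % idx
            cv2.imwrite(path, frame)
            s3.upload_file(path, dst_bucket, "%s/frame_%06d.jpg" % (key, idx))
            uploaded += 1
        idx += 1
    cap.release()
    return {"frames_uploaded": uploaded}
```

The explicit `isOpened()` check makes the missing-FFmpeg problem visible in the Lambda logs instead of silently producing zero frames.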