cameraoverview.qdoc
// Copyright (C) 2021 The Qt Company Ltd.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only

/*!
\page cameraoverview.html
\title Camera Overview
\brief Camera viewfinder, still image capture, and video recording.
\ingroup explanations-graphicsandmultimedia

The Qt Multimedia API provides a number of camera-related classes, so you
can access images and videos from mobile device cameras or web cameras.
There are both C++ and QML APIs for common tasks.

\section1 Camera Features

In order to use the camera classes, a quick overview of the way a camera
works is needed. If you're already familiar with this, you can skip ahead to
\l {camera-tldr}{Camera implementation details}.
For a more detailed explanation of how a camera works, see the following YouTube
clip.

\youtube qS1FmgPVLqw

\section2 The Lens Assembly

At one end of the camera assembly is the lens assembly (one or
more lenses, arranged to focus light onto the sensor). The lenses
themselves can sometimes be moved to adjust things like focus and zoom. They
might also be fixed in an arrangement that gives a good balance between maintaining
focus and cost.

\image how-focus-works.gif "An animation of how focus works."

\image Zoom.gif "An animation of how zoom works."

Some lens assemblies can be adjusted automatically so that
objects at different distances from the camera can be kept in focus.
This is usually done by measuring how sharp a particular area of the
frame is, and then adjusting the lens assembly to find the peak sharpness. In
some cases, the camera will always use the center of the frame for this.
In other cases, a camera may also allow this target focus region to be specified.
Some examples of this feature include:
\list
\li Face focus: Using computer vision to detect and use one or more faces as the
target.
\li Touch to focus: Enabling the user to manually select an area via the preview
screen.
\endlist

\section2 The Sensor

Once light arrives at the sensor, it gets converted into digital pixels.
This process depends on a number of factors, but ultimately comes down
to two things:
\list
\li The length of time the conversion is allowed to take, also known as the
exposure time.
\li How bright the light is.
\endlist

The longer a conversion is allowed to take, the better the resulting image
quality. Using a flash lets more light hit the sensor,
allowing it to convert pixels faster and giving better quality for the same
amount of time. Conversely, allowing a longer conversion time lets you
take photos in darker environments, \b{as long as the camera is steady}. If the
camera moves while the sensor is recording, the resulting image is blurred.

\section2 Image Processing

After the image has been captured by the sensor, the camera firmware performs
a number of image processing tasks on it to compensate for sensor
characteristics, current lighting, and desired image properties. Faster sensor
pixel conversion times may introduce digital noise, so some amount of image
processing can be done to remove this, based on the camera sensor settings.

The color of the image can also be adjusted at this stage to compensate for
different light sources: fluorescent lights and sunlight give the same object
very different appearances, so the image can be adjusted based on the
white balance of the picture (that is, the color temperature of the
light source).

\image image_processing.png "Five examples of image processing techniques."

Some forms of "special effects" can also be performed at this stage. Black
and white, sepia, or "negative" style images can be produced.

\section2 Recording for Posterity

Finally, once a perfectly focused, exposed, and processed image has been
created, it can be put to good use. Camera images can be further processed
by application code (for example, to detect bar-codes or to stitch together a
panoramic image), saved to a common format such as JPEG, or used to create a movie.
Many of these tasks have Qt classes to assist them.

\target camera-tldr
\section1 Camera Implementation Details

\section2 Detecting and Selecting a Camera

Before using the camera APIs, you should check that a camera is available at
runtime. If none is available, you could disable camera-related features
in your application. To perform this check in C++, use the
\l QMediaDevices::videoInputs() function, as shown in the example below:

 \snippet multimedia-snippets/camera.cpp Camera overview check
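
As an illustrative sketch (not a replacement for the snippet above), such a check
could be written like this; the function name is only for illustration:

\code
#include <QMediaDevices>

bool cameraIsAvailable()
{
    // An empty list means no camera input is available on this system.
    return !QMediaDevices::videoInputs().isEmpty();
}
\endcode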

Access a camera using the \l QCamera class in C++ or the \l Camera
type in QML.

When multiple cameras are available, you can specify which one to use.

In C++:

 \snippet multimedia-snippets/camera.cpp Camera selection
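
As a sketch of one way to do this, you could match a camera by its description;
the string "MyPreferredCamera" is only a placeholder:

\code
#include <QCamera>
#include <QCameraDevice>
#include <QMediaDevices>

QCamera *createPreferredCamera()
{
    const QList<QCameraDevice> cameras = QMediaDevices::videoInputs();
    for (const QCameraDevice &device : cameras) {
        if (device.description() == "MyPreferredCamera")
            return new QCamera(device);
    }
    // Fall back to the default camera if no match was found.
    return new QCamera(QMediaDevices::defaultVideoInput());
}
\endcode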

In QML, you can select the camera by setting the \l{Camera::cameraDevice} property.
You can also select a camera by its physical position on the system rather than
by its device information. This is useful on mobile devices, which often have a
front-facing and a back-facing camera.

In C++:

 \snippet multimedia-snippets/camera.cpp Camera overview position
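
A minimal sketch of the same idea, assuming the device has a front-facing camera:

\code
#include <QCamera>
#include <QCameraDevice>

// Construct a camera directly from its physical position.
QCamera *frontCamera = new QCamera(QCameraDevice::FrontFace);
\endcode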

In QML, you can set the \c Camera \l{Camera::cameraDevice}{cameraDevice} property.
Available cameras can be retrieved with MediaDevices.videoInputs.

In QML:

\qml
Camera {
    position: Camera.FrontFace
}
\endqml

If neither a QCameraDevice nor a position is specified, the default camera
is used. On desktop platforms, the default camera is set by the
user in the system settings. On a mobile device, the back-facing camera
is usually the default camera. You can get the default camera with
QMediaDevices::defaultVideoInput() or MediaDevices.defaultVideoInput
in QML.
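
For instance, a sketch that explicitly requests the default camera:

\code
#include <QCamera>
#include <QMediaDevices>

QCamera *camera = new QCamera(QMediaDevices::defaultVideoInput());
\endcode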

\section2 Preview

While not strictly necessary, it's often useful to be able to see
what the camera is pointing at. This is known as a preview.

Depending on whether you're using QML or C++, you can do this in multiple ways.
In QML, you can use a \l Camera together with a \l VideoOutput inside a
\l CaptureSession to show the preview.

\qml
Item {
    VideoOutput {
        id: output
        anchors.fill: parent
    }
    CaptureSession {
        videoOutput: output

        Camera {
            // You can adjust various settings in here
        }
    }
}
\endqml

In C++, your choice depends on whether you are using widgets or QGraphicsView.
The \l QVideoWidget class is used in the widgets case, and \l QGraphicsVideoItem
is useful for QGraphicsView.

 \snippet multimedia-snippets/camera.cpp Camera overview viewfinder
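
For the widgets case, a minimal sketch (assuming a running QApplication) could
look like this:

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QVideoWidget>

QCamera *camera = new QCamera;
QMediaCaptureSession *session = new QMediaCaptureSession;
QVideoWidget *viewfinder = new QVideoWidget;

session->setCamera(camera);
session->setVideoOutput(viewfinder);

viewfinder->show();
camera->start(); // The preview is rendered into the widget.
\endcode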

For advanced usage (like processing preview frames as they come, which enables
detection of objects or patterns), you can also use your own QVideoSink and set
that as the videoOutput for the QMediaCaptureSession. In this case, you will need to
render the preview image yourself by processing the data received from the
videoFrameChanged() signal.

 \snippet multimedia-snippets/camera.cpp Camera overview surface
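
A sketch of that approach, using \l QMediaCaptureSession::setVideoSink() and
leaving the frame handling as a stub:

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QVideoFrame>
#include <QVideoSink>

QCamera *camera = new QCamera;
QMediaCaptureSession *session = new QMediaCaptureSession;
QVideoSink *sink = new QVideoSink;

session->setCamera(camera);
session->setVideoSink(sink);

QObject::connect(sink, &QVideoSink::videoFrameChanged,
                 [](const QVideoFrame &frame) {
    // Inspect or render the frame here, for example by mapping it and
    // scanning the pixel data for patterns.
});

camera->start();
\endcode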

On mobile devices, the preview image is by default oriented in the same way as the device.
Thus, as the user rotates the device, the preview image will switch between portrait and
landscape mode. Once you start recording, the orientation will be locked. To avoid a poor
user experience, you should also lock the orientation of the application's user interface
while recording. This can be achieved using the
\l{QWindow::contentOrientation}{contentOrientation} property of QWindow.
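
One possible sketch of this, assuming \c window points to your top-level QWindow
(the variable name is only for illustration):

\code
#include <QWindow>

// Report a fixed landscape content orientation to the windowing system
// for the duration of the recording.
window->reportContentOrientationChange(Qt::LandscapeOrientation);
\endcode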

\section2 Still Images

After setting up a viewfinder and finding something photogenic, capturing an
image requires initializing a new QImageCapture object. All that is then
needed is to start the camera and capture the image.

 \snippet multimedia-snippets/camera.cpp Camera overview capture
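
As an illustrative sketch of those steps (error handling omitted, and not a
replacement for the snippet above):

\code
#include <QCamera>
#include <QImageCapture>
#include <QMediaCaptureSession>

QCamera *camera = new QCamera;
QImageCapture *imageCapture = new QImageCapture;
QMediaCaptureSession *session = new QMediaCaptureSession;

session->setCamera(camera);
session->setImageCapture(imageCapture);

camera->start();                // Start the camera pipeline and viewfinder.
imageCapture->captureToFile();  // Store the image at a default location.
\endcode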

\section2 Movies

Previously we saw code that allowed the capture of a still image. Recording
video requires the use of a \l QMediaRecorder object.

To record video, we create a camera object as before, but in addition to the
viewfinder we also initialize a media recorder object.

 \snippet multimedia-snippets/camera.cpp Camera overview movie
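
A sketch of that setup; the output file name is purely illustrative:

\code
#include <QCamera>
#include <QMediaCaptureSession>
#include <QMediaRecorder>
#include <QUrl>

QCamera *camera = new QCamera;
QMediaRecorder *recorder = new QMediaRecorder;
QMediaCaptureSession *session = new QMediaCaptureSession;

session->setCamera(camera);
session->setRecorder(recorder);

// "clip.mp4" is only an example output location.
recorder->setOutputLocation(QUrl::fromLocalFile("clip.mp4"));

camera->start();
recorder->record();
\endcode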

Signals from the \l QMediaRecorder can be connected to slots to react to
changes in the state of the encoding process or to error events. Recording
starts when \l QMediaRecorder::record() is called. This causes the signal
\l{QMediaRecorder::}{recorderStateChanged()} to be emitted. Recording is
controlled by the record(), stop(), and pause() slots of QMediaRecorder.
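
Continuing the sketch above, such connections could look like this:

\code
QObject::connect(recorder, &QMediaRecorder::recorderStateChanged,
                 [](QMediaRecorder::RecorderState state) {
    // Update the user interface to reflect the new recorder state.
});
QObject::connect(recorder, &QMediaRecorder::errorOccurred,
                 [](QMediaRecorder::Error error, const QString &errorString) {
    // Report the error to the user.
});
\endcode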

\section2 Controlling the Imaging Pipeline

Now that the basics of capturing images and movies are covered, there are a number
of ways to control the imaging pipeline to implement some interesting techniques.
As explained earlier, several physical and electronic elements combine to determine
the final images, and you can control them with different classes.

\section3 Focus and Zoom

QCamera allows you to set the general focus policy through the
\l {QCamera::FocusMode}{FocusMode} enum, which covers settings such as
\l {QCamera::FocusModeAuto} and \l {QCamera::FocusModeInfinity}.

For camera hardware that supports it, \l QCamera::FocusModeAutoNear allows
imaging of things that are close to the sensor. This is useful in applications
like bar-code recognition or business card scanning.

In addition to focus, QCamera allows you to control any available zoom
functionality using \l{QCamera::setZoomFactor}{setZoomFactor()} or
\l{QCamera::zoomTo}{zoomTo()}. The
available zoom range might be limited or entirely fixed to unity (1:1). The
allowed range can be checked with \l{QCamera::minimumZoomFactor}{minimumZoomFactor()}
and \l{QCamera::maximumZoomFactor}{maximumZoomFactor()}.
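
Putting focus and zoom together, a minimal sketch might look like this:

\code
#include <QCamera>

QCamera *camera = new QCamera;
camera->setFocusMode(QCamera::FocusModeAutoNear);

// Zoom in as far as the hardware allows, but never beyond 4x.
const float zoom = qMin(4.0f, camera->maximumZoomFactor());
camera->zoomTo(zoom, 1.0f); // the second argument is the zoom rate
\endcode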

\section3 Exposure, Shutter Speed and Flash

There are a number of settings that affect the amount of light that hits the
camera sensor, and hence the quality of the resulting image.

The main settings for automatic image taking are the
\l {QCamera::ExposureMode}{exposure mode} and \l {QCamera::FlashMode}{flash mode}.
Several other settings (such as the ISO setting and the exposure time) are usually
managed automatically, but can also be overridden if desired.

Finally, you can control the flash hardware (if present) using QCamera. In
some cases the hardware may also double as a torch.
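
For example, a night scene with automatic flash could be sketched as follows
(support for a given mode depends on the camera hardware):

\code
#include <QCamera>

QCamera *camera = new QCamera;

// Only apply modes that the hardware reports as supported.
if (camera->isExposureModeSupported(QCamera::ExposureNight))
    camera->setExposureMode(QCamera::ExposureNight);
if (camera->isFlashModeSupported(QCamera::FlashAuto))
    camera->setFlashMode(QCamera::FlashAuto);
\endcode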

\target camera_image_processing
\section3 Image Processing

The QCamera class lets you adjust the image processing part of the pipeline.
These settings include:
\list
    \li \l {QCamera::WhiteBalanceMode}{white balance}
        (also known as color temperature)
\endlist

Most cameras support automatic settings for all of these, so you shouldn't need
to adjust them unless the user wants a specific setting.
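
If a specific setting is needed, a minimal sketch could look like this:

\code
#include <QCamera>

QCamera *camera = new QCamera;

// Compensate for indoor fluorescent lighting, if the camera supports it.
if (camera->isWhiteBalanceModeSupported(QCamera::WhiteBalanceFluorescent))
    camera->setWhiteBalanceMode(QCamera::WhiteBalanceFluorescent);
\endcode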

\section3 Canceling Asynchronous Operations

Various operations, such as image capture and auto focusing, occur asynchronously.
These operations can often be canceled by the start of a new operation, as long
as this is supported by the camera.

\section1 Examples

There are both C++ and QML examples available.

\section2 C++ Examples

\annotatedlist camera_examples

\section2 QML Examples

\annotatedlist camera_examples_qml

\section1 Reference Documentation

\section2 C++ Classes

\annotatedlist multimedia_camera

\section2 QML Types

\annotatedlist camera_qml

*/