
This is a collection of 2D and 3D images used for grayscale image processing tests. It includes at least 8 images of each of the following sizes:


Accurate and efficient anomaly detection is a key enabler for the cognitive management of optical networks, but traditional anomaly detection algorithms are computationally complex and do not scale well with the amount of monitoring data. Therefore, this dataset enables research on new optical spectrum anomaly detection schemes that exploit computer vision and deep unsupervised learning to perform optical network monitoring relying only on constellation diagrams of received signals.
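One common unsupervised scheme of the kind this dataset is meant to support is a convolutional autoencoder trained only on normal constellation diagrams, with the per-image reconstruction error used as the anomaly score. The sketch below is a minimal illustration of that idea; the PyTorch framework, the 64x64 grayscale input size, and the architecture are assumptions, not the method used by the dataset authors.

```python
import torch
import torch.nn as nn

class ConstellationAE(nn.Module):
    """Minimal convolutional autoencoder for 64x64 grayscale constellation
    diagrams (illustrative architecture, not the dataset authors' model)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, batch):
    """Mean squared reconstruction error per image; higher = more anomalous."""
    with torch.no_grad():
        recon = model(batch)
    return ((recon - batch) ** 2).mean(dim=(1, 2, 3))
```

After training on normal-only diagrams, images whose score exceeds a threshold calibrated on a validation split would be flagged as anomalous.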


Given the difficulty of handling planetary data, we provide downloadable files in PNG format from the Chang'E-3 and Chang'E-4 missions, together with a set of scripts to perform the conversion from any other PDS4 dataset.
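Below is a minimal sketch of the kind of conversion such scripts perform, assuming a 2D grayscale array product read with the pds4_tools library and written with Pillow; the choice of the first array structure and the linear 8-bit stretch are illustrative assumptions, not the behaviour of the provided scripts.

```python
import numpy as np
import pds4_tools
from PIL import Image

def pds4_array_to_png(label_path, out_path):
    """Read a PDS4 product, take its first array structure, stretch it to
    8 bits, and save it as a PNG (rough sketch for a 2D grayscale array)."""
    structures = pds4_tools.read(label_path)             # parse label + data
    data = np.nan_to_num(np.asarray(structures[0].data, dtype=np.float64))
    data -= data.min()                                    # linear stretch to 0..255
    if data.max() > 0:
        data /= data.max()
    Image.fromarray((data * 255).astype(np.uint8)).save(out_path)
```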


Existing datasets for reflection symmetry detection contain only single-contour shapes, so they are not very challenging. We also need to assess how well a symmetry detector works on complex or compound shapes, where traditional contour-based methods fail. Moreover, every shape in those datasets has only one symmetry axis, so they cannot evaluate how a detector handles shapes with several symmetry axes, or how well it performs when the number of symmetry axes is unknown.
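As a simple baseline for shapes whose number of symmetry axes is unknown, one can brute-force candidate reflection axes through the centroid of a binary mask and score each by how well the mirrored shape overlaps the original. The sketch below illustrates that idea; it is an assumed baseline, not the evaluation protocol of this dataset.

```python
import numpy as np
from scipy import ndimage

def reflection_symmetry_scores(mask, n_angles=180):
    """Score candidate reflection axes through the centroid of a binary mask.

    For each orientation, the mask is re-centred, rotated so the candidate axis
    is vertical, mirrored, and compared to itself with IoU. Peaks in the returned
    (angle, IoU) list suggest symmetry axes. Rough sketch: assumes the shape
    still fits in the frame after re-centring.
    """
    mask = mask.astype(bool)
    cy, cx = ndimage.center_of_mass(mask)
    shift = (mask.shape[0] / 2 - cy, mask.shape[1] / 2 - cx)
    centred = ndimage.shift(mask.astype(float), shift, order=0) > 0.5
    scores = []
    for k in range(n_angles):
        angle = 180.0 * k / n_angles                 # axes are unoriented: 0..180 deg
        rot = ndimage.rotate(centred.astype(float), angle, reshape=False, order=0) > 0.5
        mirrored = rot[:, ::-1]                      # reflect across the vertical axis
        union = np.logical_or(rot, mirrored).sum()
        iou = np.logical_and(rot, mirrored).sum() / union if union else 0.0
        scores.append((angle, iou))
    return scores
```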


Our complex street scene (CSS) dataset, which contains scenes with strong light and heavy shadows, is derived mainly from the KITTI dataset. The data were captured by driving around the mid-size city of Karlsruhe, in rural areas, and on highways, using a standard station wagon equipped with two high-resolution color and grayscale video cameras. Up to 15 cars and 30 pedestrians are visible per image. Our aim is to verify the performance of the algorithm in specific, complex street scenes.


The color fractal images with correlated RGB color components were generated using the midpoint displacement algorithm, with vectorial increments in the RGB color space drawn from a multivariate Gaussian distribution specified by a variance-covariance matrix. This dataset contains two sets of 25 color fractal images with two correlated color components, of varying complexity expressed by the color fractal dimension, as a function of (i) the Hurst coefficient, varied from 0.1 to 0.9 in steps of 0.2, and (ii) the correlation coefficient between the red and green color channels.
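For illustration, the sketch below generates one such image with the 2D (diamond-square) variant of midpoint displacement, drawing vector increments from a multivariate Gaussian whose covariance couples the red and green channels and whose scale decays with the Hurst exponent. The unit variances, the independence of the blue channel, and the normalization to 8 bits are assumptions; the generator used for the published images may differ in its details.

```python
import numpy as np

def color_midpoint_displacement(n_levels=7, hurst=0.5, rg_corr=0.6, seed=0):
    """Diamond-square midpoint displacement with correlated RGB increments
    (illustrative sketch, not the exact generator of the dataset)."""
    rng = np.random.default_rng(seed)
    size = 2 ** n_levels + 1
    cov = np.array([[1.0, rg_corr, 0.0],      # red-green correlation, blue independent
                    [rg_corr, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
    img = np.zeros((size, size, 3))
    img[0, 0], img[0, -1], img[-1, 0], img[-1, -1] = rng.multivariate_normal(
        np.zeros(3), cov, size=4)              # random corner colours
    step, sigma = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: square centres get the mean of their 4 corners plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (img[y - half, x - half] + img[y - half, x + half] +
                       img[y + half, x - half] + img[y + half, x + half]) / 4
                img[y, x] = avg + rng.multivariate_normal(np.zeros(3), sigma ** 2 * cov)
        # Square step: edge midpoints get the mean of their in-grid neighbours plus noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neigh = [img[ny, nx] for ny, nx in
                         ((y - half, x), (y + half, x), (y, x - half), (y, x + half))
                         if 0 <= ny < size and 0 <= nx < size]
                img[y, x] = np.mean(neigh, axis=0) + rng.multivariate_normal(
                    np.zeros(3), sigma ** 2 * cov)
        sigma *= 2.0 ** (-hurst)               # roughness decays with the Hurst exponent
        step = half
    img -= img.min(axis=(0, 1))                # per-channel stretch to 8 bits
    img /= img.max(axis=(0, 1)) + 1e-12
    return (img * 255).astype(np.uint8)
```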


We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap, a volumetric capture and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric and audio data. Despite the existence of multi-view color datasets c


This dataset is for light field image augmentation. It contains 100 pairs of light field images, each consisting of an "original" and a "modified" version. The "original" is a light field image containing only the background; the "modified" is a light field image with exactly the same background plus an object placed on it.
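Because each pair shares exactly the same background, a plain per-pixel difference between corresponding views already localizes the inserted object. The sketch below assumes 8-bit image files, a grayscale comparison, and a fixed threshold, none of which are specified by the dataset.

```python
import numpy as np
from PIL import Image

def object_mask(original_path, modified_path, thresh=10):
    """Binary mask of the inserted object from an ("original", "modified") pair.
    Toy sketch: file layout and threshold are assumptions."""
    orig = np.asarray(Image.open(original_path).convert("L"), dtype=np.int16)
    mod = np.asarray(Image.open(modified_path).convert("L"), dtype=np.int16)
    return (np.abs(mod - orig) > thresh).astype(np.uint8) * 255
```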


To improve the reproducibility of our paper, we upload the experimental data and the resources used in our evaluations.


Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly-occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). In order to develop and evaluate driving scene perception models at scale, and more importantly, covering potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task.
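For evaluating segmentation models on such a subset, per-class intersection-over-union and its mean (mIoU) are the standard starting point. The sketch below assumes integer label maps and an ignore label of 255; both are common conventions rather than part of this dataset's specification.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_label=255):
    """Mean per-class IoU for a pair of integer label maps (illustrative sketch)."""
    valid = gt != ignore_label
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```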

