Connect your manual annotation tool to our API and drastically reduce your team's workload. Complement your manual annotation with automatically generated bounding boxes, segmentation masks and key points.
Use our fully customizable manual editor, integrated with our annotation API, to manage your dataset, access all our automated annotation services, and check and review the results.
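As an illustration, here is a minimal sketch of what calling the annotation API from your own tool could look like. The endpoint URL, auth header, request fields and response shape below are placeholders for this example, not the documented interface.

```python
# Minimal sketch of calling the annotation API from your own tool. The
# endpoint URL, auth header, request fields and response shape shown here
# are placeholders for illustration, not the documented interface.
import requests

API_URL = "https://api.example.com/v1/annotate"      # hypothetical endpoint
HEADERS = {"Authorization": "Bearer your-api-key"}   # hypothetical auth

with open("street_scene.jpg", "rb") as image_file:
    response = requests.post(
        API_URL,
        headers=HEADERS,
        files={"image": image_file},
        data={"tasks": "bounding_boxes,segmentation,keypoints"},
    )
response.raise_for_status()

# Assumed response shape: a list of detections with a label, score and box.
for detection in response.json().get("detections", []):
    print(detection["label"], detection["score"], detection["box"])
```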
Our system can recognize thousands of objects from the world around us. If we do not have a pre-trained model for your object, we can still train our system to recognize it from a few examples or from some initial manually annotated data.
Our system already understands many of the areas that appear in pictures and can segment them automatically. If that is not the case for some areas, you can always take advantage of our AutoML system and train a model for them as you segment the images.
We have hundreds of pre-trained models for detailed instance segmentation. You can also use our automated ground truth extraction, which learns from your selections and can then recognize and segment objects without any additional training.
As with the previous techniques, our system only needs a few examples of the object you are looking for to start providing bounding boxes autonomously.
Our system automatically finds the optimal orientation of the bounding box to fit the object.
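For readers curious how an oriented box can be derived, here is a small sketch that fits a minimum-area rotated rectangle to an instance mask with OpenCV; the file name and mask are illustrative.

```python
# Sketch: deriving an oriented bounding box from a binary instance mask with
# OpenCV's minimum-area rotated rectangle. File name and mask are illustrative.
import cv2
import numpy as np

mask = cv2.imread("instance_mask.png", cv2.IMREAD_GRAYSCALE)       # 0/255 mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)                        # main object

rect = cv2.minAreaRect(largest)            # ((cx, cy), (w, h), rotation angle)
corners = cv2.boxPoints(rect)              # the four rotated corner points
print("oriented box corners:\n", np.int32(corners), "\nangle:", rect[2])
```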
The platform has pre-trained models for lane line and road sign recognition, as well as for landmarks like curbs. We also have algorithms for similarity-based segmentation and texture recognition, so even for open, amorphous objects our system makes extracting those areas an easy task.
The platform has several output formats; rendering and returning masks instead of polygons is one of them. Contact us and we will make sure we match your annotation file format.
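The difference between the two formats boils down to a rasterisation step; here is a small sketch of it with OpenCV, using an illustrative polygon and image size.

```python
# Sketch: rendering a polygon annotation as a binary mask, i.e. the conversion
# behind the mask output format. Image size and polygon are illustrative.
import cv2
import numpy as np

height, width = 480, 640
polygon = np.array([[100, 120], [300, 110], [320, 300], [90, 310]], dtype=np.int32)

mask = np.zeros((height, width), dtype=np.uint8)
cv2.fillPoly(mask, [polygon], color=255)     # 255 inside the object, 0 outside
cv2.imwrite("object_mask.png", mask)
```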
Our set of pre-trained models includes key point annotation for bodies, faces and other objects. An algorithm that can recognize the subparts of an object is coming soon.
As you can see, our model for text recognition and extraction works even with the most difficult images.
Our system can automatically break a video up into frames so that we can then apply all the image processing techniques mentioned above.
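Conceptually, the frame extraction step looks like the sketch below, written with OpenCV and an illustrative file layout.

```python
# Sketch: splitting a video into individual frames so that the per-image
# models can be applied to each one. File names are illustrative.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("input_video.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:                                   # end of the video
        break
    cv2.imwrite(f"frames/frame_{frame_index:06d}.jpg", frame)
    frame_index += 1
cap.release()
```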
Our composable pipeline allows us to combine different models into a final complex dataset. In the case of videos, we simply add a tracker and a re-identification module so that we can follow the same object throughout the footage.
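To give an idea of what the tracking step does, here is a deliberately simplified, greedy sketch that links detections across frames by bounding box overlap; the production pipeline also adds a learned re-identification model on top of this.

```python
# Toy sketch of frame-to-frame tracking by bounding-box overlap (IoU).
# Greedy and deliberately simplified; the real pipeline adds a learned
# re-identification model so identities survive occlusions and gaps.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track(frames, iou_threshold=0.5):
    """Assign a persistent id to every detection in a list of frames,
    where each frame is a list of (x1, y1, x2, y2) boxes."""
    next_id, last_boxes, results = 0, {}, []     # last_boxes: id -> last box
    for detections in frames:
        frame_out = []
        for box in detections:
            match = max(last_boxes.items(),
                        key=lambda item: iou(item[1], box), default=None)
            if match and iou(match[1], box) >= iou_threshold:
                track_id = match[0]
            else:
                track_id, next_id = next_id, next_id + 1
            last_boxes[track_id] = box
            frame_out.append((track_id, box))
        results.append(frame_out)
    return results
```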
Our composable pipeline allows us to do all sorts of combinations between the outputs of different deep learning models and classical models. In this case we are combining a model that detects people with one that detects bikes, and delivering the intersection.
In this case, the result is obtained by excluding the items where people and bikes appear together.
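As a simple illustration of such a composition, the sketch below keeps or removes people depending on whether their box overlaps a bike box; the box values are made up for the example.

```python
# Sketch: composing the outputs of a people detector and a bike detector.
# "intersection" keeps people that overlap a bike; "exclusion" keeps the rest.
# Boxes are (x1, y1, x2, y2) tuples; all values below are illustrative.

def overlaps(a, b):
    """True if two boxes share any area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def compose(people_boxes, bike_boxes, mode="intersection"):
    with_bikes = [p for p in people_boxes
                  if any(overlaps(p, b) for b in bike_boxes)]
    if mode == "intersection":
        return with_bikes                                     # people on bikes
    return [p for p in people_boxes if p not in with_bikes]   # pedestrians only

people = [(10, 10, 50, 120), (200, 20, 240, 130)]
bikes = [(5, 60, 60, 140)]
print(compose(people, bikes, mode="exclusion"))   # -> [(200, 20, 240, 130)]
```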
Our system can also do more complex things, like interpreting when an object is in a certain context. For example, here the two boxed people are the result of combining road recognition and people recognition.
One of our services is batch analysis. Initially this service uses pre-trained models for specific instances, but instead of working on an image-by-image basis, it processes a complete batch of images in one go. We expect a zip file containing either individual images or video. For video files we add tracking and re-identification of the instances. We also perform text extraction from images and videos as part of the services we provide for batch datasets.
In this case our API still allows you to retrieve the results through a REST call, so you can either integrate it into your own editor or manage and display the results in our in-house editor.
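The sketch below shows what that batch workflow could look like: upload a zip of images or video, wait for processing, then fetch the annotations with a REST call. The endpoint paths, field names and job schema are illustrative assumptions, not the documented interface.

```python
# Sketch of the batch workflow: upload a zip of images or video, poll until the
# batch has been processed, then download the annotations with a REST call.
# Endpoint paths, field names and the job schema are illustrative assumptions.
import time
import requests

BASE_URL = "https://api.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": "Bearer your-api-key"}  # hypothetical auth

# 1. Submit the batch.
with open("dataset_batch.zip", "rb") as archive:
    job = requests.post(f"{BASE_URL}/batches", headers=HEADERS,
                        files={"archive": archive}).json()

# 2. Poll until processing is done, then fetch the results.
while True:
    status = requests.get(f"{BASE_URL}/batches/{job['id']}", headers=HEADERS).json()
    if status["state"] == "done":
        break
    time.sleep(30)

annotations = requests.get(f"{BASE_URL}/batches/{job['id']}/results",
                           headers=HEADERS).json()
print(len(annotations["items"]), "annotated items received")
```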
For batches where we have no pre-trained models, we use our Machine in the Loop pipeline, in which all manual steps of a deep learning training pipeline have been replaced by algorithms. For this, besides the raw data, we need examples of the objects you want to annotate. If you do not have the tools to create these examples, you can either use our in-house editor or we can obtain them with our image search engine.
In batch processing projects, since the client is not reviewing the results in parallel, we provide two datasets: one whose accuracy we are certain of, and another to be reviewed to make sure it fulfills the quality requirements.
Our manual editor, based on Label Studio, is very versatile and easy to use. It doubles as a dataset review manager, and we have modified it so that it can call all the automated services from our API directly. It is ideal for companies that do not have specific software for this task.
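For teams that use our editor, automated results arrive as pre-annotations. The sketch below shows one way detections from the API could be converted into Label Studio's rectangle-label prediction format; the detection dictionary shape is an assumption for this example.

```python
# Sketch: converting detections returned by the API into Label Studio
# pre-annotations, so they appear in the editor as reviewable predictions.
# The detection dictionary shape is an assumption for this example; the output
# follows Label Studio's rectangle-label convention (coordinates in percent).

def to_labelstudio_prediction(detections, img_width, img_height):
    results = []
    for det in detections:                 # det: {"label", "score", "box"}
        x1, y1, x2, y2 = det["box"]
        results.append({
            "from_name": "label",
            "to_name": "image",
            "type": "rectanglelabels",
            "value": {
                "x": 100 * x1 / img_width,
                "y": 100 * y1 / img_height,
                "width": 100 * (x2 - x1) / img_width,
                "height": 100 * (y2 - y1) / img_height,
                "rectanglelabels": [det["label"]],
            },
        })
    return {"result": results}
```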
Our editor also allows for model composition. You can narrow down the dataset by seamlessly combining different models to create complex classes, such as only men wearing black uniforms, or smiling children instead of all the children in the picture.