
POIR.04.01.04-00-0078/20

Development of smart swarm reconfigurable drone technology and its verification on the example of drone demonstration and photovoltaic farm inspection

The subject of the project is an intelligent drone swarm technology characterized by scalability, reliability and resilience, as well as autonomy and the ability to realize various spatial drone configurations. The drone swarm simulator described below is based on the Microsoft AirSim environment and the open-source PX4 flight controller software in the Software In The Loop (SITL) concept.


The mathematical model of the drone accurately represents the physical drone: the moments of inertia are determined from the CAD model and stored in a configuration file. The environment supports simulation with all target protocols and communication methods, such as MAVLink, and with simulated external signal sources such as GNSS. The results of the simulation studies form the basis for testing a swarm of physical drones. The smart drone swarm technology will be verified in real conditions by using the swarm for an interactive drone show and for inspecting photovoltaic farms for possible damage; for large photovoltaic farms, using a swarm significantly reduces inspection time. The above elements will be developed through industrial research in the first two stages of the project and through four stages of development work. The project is carried out in a consortium with an academic unit.
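The pattern of keeping CAD-derived mass properties in a configuration file can be sketched as follows. The JSON layout, field names and numbers below are hypothetical, invented purely for illustration; the box formula is a standard sanity check for CAD-derived inertia values:

```python
import json

# Hypothetical config exported from the CAD model: mass and principal
# moments of inertia of the drone (field names are illustrative).
CONFIG = '''
{
  "mass_kg": 1.2,
  "inertia": {"ixx": 0.012, "iyy": 0.012, "izz": 0.021}
}
'''

def load_inertia(text):
    """Parse the config and return (mass, 3x3 diagonal inertia matrix)."""
    cfg = json.loads(text)
    i = cfg["inertia"]
    matrix = [
        [i["ixx"], 0.0, 0.0],
        [0.0, i["iyy"], 0.0],
        [0.0, 0.0, i["izz"]],
    ]
    return cfg["mass_kg"], matrix

def box_inertia(mass, x, y, z):
    """Moments of inertia of a solid box of dimensions x, y, z about its
    center: a quick plausibility check against the CAD-exported values."""
    return (mass * (y**2 + z**2) / 12.0,
            mass * (x**2 + z**2) / 12.0,
            mass * (x**2 + y**2) / 12.0)

mass, inertia = load_inertia(CONFIG)
print(mass, inertia[2][2])
```

Storing these values in a file rather than in code is what lets the same simulator binary represent different physical airframes.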


WND-RPSL.01.02.00-24-00AC/19-010

Innovative system for identification and re-identification of persons based on facial images captured in a short video sequence to enhance security at mass events

The subject of the project is:

  1. conducting research work,
  2. developing a technology, and
  3. implementing a pilot solution for an intelligent video surveillance and analytics system to enhance the security of mass events held at sports venues, stadiums and indoor arenas.

The research work included the development of algorithms for face detection in images, normalization of face orientation and illumination, detection of anthropometrically significant feature points, dimensionality reduction, explicit and implicit metrics on biometric feature representations, classification, and effectiveness statistics. This broad scope made it possible to determine how the effectiveness statistics depend on the technological solutions used and their variants, on the specifics of the monitored environment, and on adaptation to the acquisition conditions, both environmental and behavioral.
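Two of the stages listed above, dimensionality reduction and a metric over the reduced biometric representations, can be illustrated with a minimal NumPy sketch. The data and dimensions are synthetic; the project's actual algorithms are far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "face vectors": 50 flattened images of 100 pixels each.
faces = rng.normal(size=(50, 100))

def pca_reduce(data, n_components):
    """Project data onto its top principal components via SVD."""
    mean = data.mean(axis=0)
    centered = data - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T, vt[:n_components], mean

def cosine_distance(a, b):
    """An explicit metric on reduced biometric representations."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

reduced, components, mean = pca_reduce(faces, 10)
# A representation is at zero distance from itself under this metric.
d_self = cosine_distance(reduced[0], reduced[0])
print(reduced.shape, round(d_self, 6))
```

In a real system the reduced representations would feed a trained classifier, and the choice of metric itself is one of the variables the effectiveness statistics are measured against.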


The developed technology concerns the re-identification of a person based on a facial image in a short video sequence acquired at an entry gate and/or from a high-resolution or PTZ camera. The innovation of the technology to be produced under the project lies in its intelligent analytics component, implemented using advanced computer vision algorithms, machine learning and deep neural networks. Analyzing a short fragment of video containing the face of the person to be re-identified makes it possible to compensate for head orientation, facial expressions and occlusion, and, thanks to the person's movement (natural or provoked), to reconstruct 3D information about the whole face or selected points of it. Such a solution represents a national innovation in both process and product terms. The Research Center of the Polish-Japanese Academy of Information Technology acquired its competencies in video surveillance systems as a result of two NCBiR-funded projects: "Application of video surveillance systems for identification of behaviors and persons and detection of dangerous situations using biometric techniques and inference of characters in 3D from video" and "System of intelligent video analysis for recognition of behaviors and situations in surveillance networks".
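The benefit of a short video fragment, namely many slightly different views of the same face, can be sketched as embedding aggregation: per-frame descriptors are pooled into one sequence-level descriptor before matching against a gallery. The vectors below are synthetic stand-ins; a real system would use learned deep embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate(frame_embeddings):
    """Average per-frame face embeddings into one sequence descriptor,
    smoothing out per-frame pose and expression variation."""
    pooled = frame_embeddings.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

# Gallery: one reference embedding per enrolled person (unit vectors).
gallery = rng.normal(size=(5, 64))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Probe: 12 noisy frames of person 3, e.g. from an entry-gate camera.
frames = gallery[3] + 0.2 * rng.normal(size=(12, 64))

probe = aggregate(frames)
best = int(np.argmax(gallery @ probe))  # cosine-similarity match
print(best)
```

Averaging over frames is the simplest aggregation choice; it already shows why a sequence is more robust than any single noisy frame.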

The project is in line with the objectives of the WSL ROP because it:
- increases the marketization of research and development activity,
- increases the research and development activity of enterprises,
- contributes to the economic growth of the Silesian Voivodeship,
- implements a technology that is innovative on a national scale,
- increases the competitiveness of enterprises on the Polish market,
- contributes to increased employment and thus reduced unemployment in the Silesian Voivodeship,
- fits into technological area 4, Information and Telecommunications Technologies, of the Silesian Voivodeship Technology Development Program for 2010-2020.


POIR.01.02.00-00-0160/20

Innovative technology for creating multimedia events based on drone battles with synergy between virtual, augmented and physical levels

The result of the project will be a demonstrator (TRL IX) of a system of three synergistically linked levels of drone combat events, allowing for the comprehensive organization of innovative entertainment.

The technology being developed in the project focuses on the creation of an entirely new infrastructure combining a virtual environment with physical access to drones in a DaaS model. The technology will consist of 3 environments, collectively a precursor to a new entertainment use of drones in the consumer segment: drone fights. Currently, there is no organized and widely available formula on the market for conducting such fights on this scale.


At the 1st level, the fights will take place in a virtual environment, in multiplayer mode.

At the 2nd level, the fights take place in an augmented reality environment, where avatars of physical drones can meet virtual ones. In this version of the event, the viewer can also take an active part.

At the 3rd level, only physical drones are present; the fights take place in the space of a stadium, arena or airport and consist of teams fighting according to set criteria. In this version of the event, the viewer has the same capabilities as in the virtual version, except for summoning their own virtual camera.

Project value: PLN 12,515,908.65
European Funds contribution: PLN 9,882,100.86


Pol-Nor/204256/16/2013

Automated Assessment of Joint Synovitis Activity from Medical Ultrasound and Power Doppler Examinations using Image Processing and Machine Learning Methods. MEDUSA

The project is implemented within the framework of the Polish-Norwegian Research Cooperation, Norwegian Financial Mechanism 2009-2014.

Project Manager: Dr. Adam Cupek
Supervisor: Prof. Konrad Wojciechowski, Ph.D.

The project is implemented by a consortium consisting of: Silesian University of Technology, Faculty of AEiI (Leader); Polish-Japanese Academy of Information Technology, Bytom Branch, Department of Computer Science; ITAM Institute of Medical Technology and Apparatus; Helse Forde HF; and Sogn og Fjordane University College.


The goal of the project is to create, using computer vision and machine learning techniques, a system for assessing the degree of rheumatoid inflammation in the finger joints of the hand from ultrasound images. The project is interdisciplinary, combining medicine and computer science, and as a modern health-diagnostics tool it is financed by the Polish-Norwegian fund. The quality of the system's classification of the degree of inflammation was verified by medical specialists on both test data and current data. In the created system, the assessment proceeds in stages. First, areas of skin and bone are found in the ultrasound image. Based on these, the joint area is identified, within which the area of inflammation is determined using a segmentation technique. The degree of inflammation is then determined from the geometric features of the segmented area. The basis for training the system in all the above phases was a set of images with areas of bone, skin and inflammation marked manually by a doctor.
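The staged pipeline, segmenting a region and then grading it from geometric features, can be illustrated on a toy image. The threshold, features and grade boundaries below are invented for illustration and bear no relation to the clinically validated system:

```python
import numpy as np

def segment_inflammation(image, threshold=0.5):
    """Stand-in for the segmentation stage: a simple binary threshold."""
    return image > threshold

def geometric_features(mask):
    """Geometric features of the segmented region: pixel area and the
    fraction of its bounding box that the region fills."""
    area = int(mask.sum())
    ys, xs = np.nonzero(mask)
    extent = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return area, area / extent

def grade(area, fill_ratio):
    """Stand-in for the grading stage: map features to a coarse 0-3 grade."""
    if area < 5:
        return 0
    return 1 + (area >= 20) + (fill_ratio > 0.8)

# Toy "ultrasound" image with a bright 6x6 patch standing in for synovitis.
img = np.zeros((32, 32))
img[10:16, 8:14] = 0.9

mask = segment_inflammation(img)
area, fill = geometric_features(mask)
print(area, grade(area, fill))
```

The real system learned each stage from doctor-annotated images rather than using fixed thresholds, but the flow of information (image, region, features, grade) is the same.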


POIG.02.03.00-24-149/13

Interdisciplinary laboratories for analysis and synthesis of movement in the shareconomy formula.

Project co-financed by the European Regional Development Fund under the Innovative Economy Operational Program, European Funds - for the development of innovative economy. We invest in your future.

The aim of the project was to expand, and in part upgrade, the already existing laboratories, namely the Human Motion Lab (HML), the Human Microexpression Lab (HMX), the Human Seeing Lab (HSL) and the Human Facial Modeling Lab (HFML), and to create new ones in the area of competence of CBR PJATK Bytom. The guiding principle of the project was to ensure that the laboratories could be shared with directly cooperating units (under letters of intent) and with potentially interested parties.


As part of the development of the existing laboratories, the HML laboratory was retrofitted with 10 new-generation cameras, for a total of 20 NIR cameras, which will improve and at the same time speed up motion acquisition and data pre-processing. A network of motion acquisition points was established for HSL, enabling the acquisition of large training sets for behavioral pattern classification as well as algorithm testing. The software supporting the labs was updated by refactoring the existing implementation and extending its architecture and functionality. The refactoring consisted of tidying up the code in the service layer, optimizing the database schema, and adding functionality to speed up search operations. In terms of architecture, extensions were introduced to allow geographic dispersion of the HMDB dataset itself and its interoperability with local medical data repositories, separated and placed on the side of health units to protect sensitive data. With the introduction of the distributed architecture, mechanisms for synchronizing data with local copies on the client application side were also improved. The expanded functionality includes support for applying corrections and modifications to the collected data and their optional versioning. The functionality of users' private areas associated with their HMDB accounts was developed, and mechanisms were introduced for extending the system with new specific analytical processes.


Strategmed 1/233221/3/NCBR/2014

Using medical data tele-transmission to improve the quality of life of heart failure patients and reduce the cost of their treatment. MONITEL-HF

The project has received funding from the National Center for Research and Development.

Project Manager: Prof. Lech Poloński

The project is implemented by a consortium consisting of: Silesian Center for Heart Diseases in Zabrze (Leader), Polish-Japanese Academy of Information Technology, Wasko S.A., ENTE sp. z o.o., Kardio-Med Silesia sp. z o.o., American Heart of Poland S.A., ITAM Institute of Medical Technology and Apparatus, and Novum S.A.


The aim of the tasks carried out by PJATK was to develop a concept and produce a test batch of 10 shirts with electrodes enabling the measurement of the parameters defined in the project. As part of the task, an exhaustive review of wearable systems for the acquisition of human vital parameters was carried out. Three solutions were tested, but none met the system's requirements; on this basis, production of the system's measurement layer was commissioned from ITAM. An exhaustive review of solutions and materials used in the electrodes of wearable systems was also carried out, and a proprietary electrode system integrated within the shirts with measuring modules was designed and manufactured.


UOD-DEM-1-183/001

Intelligent video analysis system for behavior and situation recognition in surveillance networks. Demonstrator +


The project was co-financed by the National Center for Research and Development.

Project Manager: Dr. Tomasz Czapla

The project is being implemented by a consortium consisting of: OBRUM Sp. z o.o. (Project Leader), PJATK (Project Partner), and SilSense Technologies (Project Partner).


The goal of the project, with the acronym SAVA, is to develop and verify in real conditions a demonstrator of a prototype intelligent video analysis (IVA) system able to recognize and classify, in real conditions, the behavior and actions of individuals and groups, and to identify situations that require an alert. The project builds on the innovative results of the project "Use of video surveillance systems to identify behaviors and individuals and detect dangerous situations using biometric techniques and 3D character inference from video": SAVA transfers that project's behavior recognition technology and the related augmented moving-object tracking technique, new motion representation, and learning methods into a usable prototype at the 9th technology readiness level. The SAVA prototype will operate in the cloud and will be more advanced than the existing research prototype, and much faster, performing recognition many times faster than real-time video and thus allowing multiple cameras to be served simultaneously.


Innotech K2/IN2/50/182645/NCBR/12

New technologies for high-resolution facial expression acquisition and animation.

The project has received funding from the National Center for Research and Development.

Project Manager: Prof. Konrad Wojciechowski, Ph.D.


The goal of the project was to develop technology for transferring (retargeting) an actor's facial expressions onto a high-resolution facial mesh, together with a photogrammetric technology for scanning 3D objects. Two alternative technologies were developed: in the first, the actor's facial expressions are acquired using a marker system; in the second, with a hardware-synchronized video camera system. The high-resolution facial mesh onto which the expressions are transferred, in general different from the actor's own facial mesh, is acquired once, statically, using a 3D scanner or a system of synchronized cameras implementing a photogrammetric scanning technology unique in the country. The developed technologies take the form of software that supports, and in some cases automates, the work of computer game character animators. One of the developed retargeting technologies, while allowing intervention by the animator, makes it possible to transfer facial expressions recorded in monocular footage, that is, from only one camera, onto a 3D mesh. This aspect of the technology is unique in the market of computer game development tools. The developed technology reduces the animator's working time dramatically, by roughly two orders of magnitude.
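One common way to retarget expressions between different meshes is to express each captured frame as a weighted combination of the actor's basis expressions (blendshapes) and reapply those weights to the target character's corresponding basis. The least-squares sketch below uses synthetic meshes and is a generic illustration of this idea, not the project's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Actor: a neutral pose plus 4 basis expressions, 30 vertex coordinates each.
actor_neutral = rng.normal(size=30)
actor_basis = rng.normal(size=(4, 30))    # per-blendshape vertex offsets

# Target character: its own neutral pose and corresponding blendshape offsets.
target_neutral = rng.normal(size=30)
target_basis = rng.normal(size=(4, 30))

def retarget(actor_frame):
    """Solve for blendshape weights on the actor (least squares), then
    apply the same weights to the target's blendshapes."""
    offsets = actor_frame - actor_neutral
    weights, *_ = np.linalg.lstsq(actor_basis.T, offsets, rcond=None)
    return target_neutral + weights @ target_basis, weights

# A captured frame built from known weights should recover those weights.
true_w = np.array([0.5, 0.0, 0.2, 0.0])
frame = actor_neutral + true_w @ actor_basis
_, w = retarget(frame)
print(np.round(w, 3))
```

Because the weights, not the vertex positions, are transferred, the target mesh can differ from the actor's mesh, which is exactly the situation described above.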


PBS 178438

Human motion acquisition costume based on IMU sensors with data collection, visualization and analysis software.

The project has received funding from the National Center for Research and Development.

Project Manager: Prof. Konrad Wojciechowski, Ph.D.

The project was implemented by a consortium consisting of: Polish-Japanese Academy of Information Technology (Leader), Silesian University of Technology in Gliwice, Faculty of AEI, and the Textile Institute in Lodz.


The practical goal of the project was to create a handy, cost-effective human motion acquisition system, easily profiled for various applications, based on energy-efficient miniature IMU modules (up to 50 pcs) sewn into a non-restrictive costume, connected by a single CAN bus to a hub and likewise powered by a single cable. The modular system software provides basic functionalities such as motion data collection, visualization and analysis, and allows new functionalities to be added via plug-ins. The number of modules and their arrangement is chosen according to the specific task: in the maximum version, the system enables motion acquisition of the entire figure; in the minimum version, acquisition and analysis of the movement of a single joint. Acquired motion data are processed at the primary and secondary levels in each module, or saved as raw data and processed on an external computer running the software supporting the entire system. The theoretical work completed as part of the project involved the modification, implementation and testing of algorithms for integrating data from a triaxial accelerometer, gyroscope and magnetometer.
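Fusing the gyroscope with the accelerometer for a single joint angle can be sketched as a complementary filter, a textbook baseline for this kind of integration (not the project's algorithm; magnetometer handling and full 3D orientation are omitted):

```python
import math

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer tilt (noisy but drift-free) into one angle estimate
    per sample. Angles in degrees, rates in deg/s."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates

# Stationary limb held at 30 degrees: the gyro reads only a small drift
# bias, while the accelerometer reads a noisy but unbiased 30 degrees.
n = 2000
gyro = [0.5] * n                                      # deg/s of pure drift
accel = [30.0 + 2.0 * math.sin(i) for i in range(n)]  # noisy tilt reading

est = complementary_filter(gyro, accel)
print(round(est[-1], 2))
```

Pure gyro integration would drift by 10 degrees over these 20 seconds; the filtered estimate stays near the true 30 degrees while smoothing the accelerometer noise.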


UMO-2011/01/BST6/06988

Diagnosis of selected gait abnormalities based on multimodal motion analysis

Project supported by funds from the National Science Center

Project Manager: Adam Świtoński, Ph.D.

The goal of the project was to verify the usefulness of multimodal motion data in the diagnosis of selected diseases.


Within the framework of the project, work was carried out on developing a methodology that uses multimodal movement data, in an experimental setup containing synchronized kinematic data, ground reaction forces, dynamic electromyography (EMG) and video, enriched with appropriate computer processing and recognition algorithms, to support the diagnosis of gait abnormalities. The work focused on: a) quality assessment of the acquired data, b) processing of multimodal data to obtain features supporting diagnosis of the considered conditions, c) development of algorithms for analysis and classification of multimodal movement patterns, and d) statistical evaluation of the relationships between multimodal data features and the patient's clinical condition for the considered gait abnormalities.


4757/B/T02/2011/40

Reducing the dimensionality of the time series of poses and discovering the manifold to which they belong for clustering, classification and visualization of motion.

Project supported by funds from the National Science Center

Project Manager: Prof. Andrzej Polanski

The overall theme of the project was research into the dimensionality reduction of data representing character pose and human motion. The dimensionality reduction research is driven by two factors: (i) the pursuit of data reduced to 3D or 2D, which allows visualization and perception, and (ii) simplification and improvement of classification efficiency. The classification aspect is predominant in the area of pose and motion data reduction, hence much attention was paid to it in the project. The scientific objectives were to test, modify and improve algorithms for reducing the dimensionality of poses and time series of poses, to develop a multi-resolution representation of motion using a lifting scheme, and to apply the results to the clustering, classification and visualization of motion.
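A multi-resolution motion representation built with a lifting scheme can be illustrated by one Haar lifting step on a 1-D joint-angle trajectory. This is a generic sketch of the technique, not the project's implementation:

```python
def haar_lift(signal):
    """One Haar lifting step: split into even/odd samples, predict the
    odd samples from the even ones, then update the even samples.
    Yields a half-length coarse approximation and a detail band."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]           # predict step
    coarse = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return coarse, detail

def haar_unlift(coarse, detail):
    """Exact inverse: lifting steps are trivially invertible, so the
    multi-resolution representation is lossless."""
    even = [c - d / 2.0 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

angles = [10.0, 12.0, 15.0, 15.0, 14.0, 10.0, 8.0, 9.0]
coarse, detail = haar_lift(angles)
assert haar_unlift(coarse, detail) == angles  # perfect reconstruction
print(coarse, detail)
```

Applying the lift recursively to the coarse band produces the multi-resolution pyramid; discarding fine detail bands gives progressively smoothed versions of the motion.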


2892/B/T02/2011/40

Development of quantitative measures of movement, rationalizing, on the basis of multimodal movement measurement, the subjective UPDRS criteria to improve diagnosis before and after DBS pacemaker implantation in patients with Parkinson's disease.

Project supported by funds from the National Science Center

Project Manager: Prof. Andrzej Polanski

The goal of the project was to develop movement measures that objectify the diagnosis of patients with Parkinson's disease, including the rationalization of drug and stimulation therapy. Multimodal movement task performance data were collected for 20 patients with Parkinson's disease and an implanted DBS stimulator. Patients in the MedON, MedOFF, StimON and StimOFF states performed movement tasks designed by specialists.


Movement task completion scores given by a neurologist on the UPDRS scale were correlated with various movement indices. The following indices were studied: ASA (Arm Swing Asymmetry), ASSS (Arm Swing Size Symmetry), DI (Decomposition Index), SLA (Stride Length Asymmetry), T (Tremor) and FoG (Freezing of Gait). The DI index showed the highest correlation coefficient with the UPDRS form scores. A further result of the project is a database of multimodal movement data of patients with Parkinson's disease, which can provide a basis for continuing research on the movement of such patients. The results obtained can also form the basis for automatic dopamine dosing and for selecting stimulator settings in DBS patients.
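The procedure of picking the index most correlated with clinician scores can be sketched with Pearson's r. The numbers below are synthetic stand-ins, not the study's data, and are constructed so that the DI column happens to win, mirroring the finding reported above:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

updrs = [12, 18, 25, 31, 40, 44, 52, 60]  # synthetic clinician scores
indices = {
    "ASA": [3, 5, 4, 8, 7, 9, 8, 11],
    "DI":  [0.1, 0.2, 0.3, 0.4, 0.5, 0.55, 0.7, 0.8],
    "T":   [2, 1, 4, 2, 5, 3, 4, 4],
}

# Rank candidate movement indices by |r| against the UPDRS scores.
best = max(indices, key=lambda k: abs(pearson_r(indices[k], updrs)))
print(best, round(pearson_r(indices[best], updrs), 3))
```

In the actual study the series would be per-patient (and per-state) index values against the neurologist's UPDRS form scores.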


OR 00002111

Use of video surveillance systems to identify behaviors and individuals and detect dangerous situations using biometric techniques and 3D character inference from video.

The project has received funding from the National Center for Research and Development.

Project Manager: Prof. Andrzej Polanski

The aim of the project was to develop a lab-scale IVA (Intelligent Video Analytics) video surveillance system. The work focused on the functionalities most difficult to achieve in video surveillance systems: i) recognizing a person based on gait, ii) recognizing the behavior of individual persons, such as walking, running, falling or jumping, and iii) recognizing interactions between persons and groups of persons. Algorithms implementing the required functionalities were created in two streams. In the first, the input was video data directly. In the second, video data from one or two cameras formed the basis for inferring a 3D character whose spatial configuration was then the basis for behavior classification. Obtaining the intended functionality required the development of a number of basic tools, among them the annotation system used to create the classifier's training sample. The basis for all the work, as well as for the tests, were recordings from four HD PTZ cameras placed in the Bytom market square. For the purposes of the project, a database was created with about 3000 hours of recordings, of which approx. 3500 events were annotated. This database can form the basis for further work in the field of video surveillance systems.


N N518 427236

Testing and development of classification algorithms based on hyperspectral images in photodynamic and fundus diagnostics.

The project was supported by funds from the Ministry of Science and Higher Education.

Project Manager: Adam Świtoński, Ph.D.

The aim of the project was to study the possibilities and benefits of using multispectral and hyperspectral imaging in ophthalmic diagnostics. The work focused on several complementary aspects: i) development of IT tools to support scientific work, including the collection, processing and analysis of multispectral images, ii) construction of a device for multispectral imaging of ocular structures and acquisition under clinical conditions, and iii) scientific and research work on testing and developing methods for the classification and processing of multispectral and hyperspectral images. The research was conducted in cooperation with the Ophthalmology Clinic of the Medical University of Wroclaw, where multispectral acquisition of ocular structures was carried out. The following disease entities were selected: posterior and anterior uveitis, cataract, glaucoma and diabetic retinopathy. In total, covering photodynamic and ophthalmic diagnosis, the accumulated database of multispectral images includes 109 images.


UDA-POIG.01.03.01-14-138/08-02

System with a library of modules for advanced analysis and interactive synthesis of the human figure.

Project co-financed by the European Regional Development Fund under the Operational Programme Innovative Economy 2007-2013, Measure 1.3, Sub-measure 1.3.1

Project Manager: Prof. Konrad Wojciechowski, Ph.D.

The immediate goals of the project were to introduce innovative services in the field of orthopedics, lay the groundwork for further research in motion biometrics, and increase the competitiveness of gaming and advertising companies. An important element of the project was the establishment of a high-tech laboratory for multimodal motion measurement. The system includes independent modules for kinematics, electromyography, dynamometry and video, as well as modules for recording and backing up data from the individual modules. Data from the different modules are synchronized with respect to the kinematics module.


R13 046 02

Information system for optical tissue imaging and diagnostic and prognostic support in selected cancers.

The project was supported by funds from the Ministry of Science and Higher Education.

Project Manager: Prof. Konrad Wojciechowski, Ph.D.

The aim of the project was to develop a computer system for optical multispectral imaging of tissues in photodynamic diagnostics, based on analysis of the fluorescence spectrum after systemic or local administration of a photosensitizer. Multispectral imaging, combined with optimal configuration and parameterization of the image processing modules, enables diagnostic and prognostic support in selected cancers, including early detection of pre-cancerous conditions, more accurate determination of the boundary between healthy and pathological tissue, shorter treatment decision times, evaluation of treatment efficacy, non-invasive control and follow-up of patients after treatment, and early localization of recurrence.