The goal of this AI service is to automate the annotation of materials in BIM models using images of the real object and its environment. A convolutional neural network, trained on suitable datasets, recognizes and segments the different materials contained in 2D images. 360° images are used, and their rotation and translation are automatically registered to the IFC model (using SLAM algorithms or an RTK GPS system), so that the detected materials can be assigned directly to the components and objects in the model. To make the annotated models usable for acoustic propagation simulations, the acoustic properties of the materials (in particular absorption and reflection coefficients) are also annotated. The acoustic data can be taken either from in-house measurements or from existing databases (e.g. the Absorption Coefficient Database).
This service automatically annotates materials and, optionally, material properties in BIM models. For this purpose, 360° images are taken of the model's real-world environment. The service analyzes these images for the materials they contain and segments them into corresponding image regions. In addition, the images are automatically registered to the BIM model so that the detected materials can be assigned to the components it contains. By linking the materials to their acoustic absorption coefficients, the annotated models can then be used, for example, for sound simulations.
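The last step, linking detected materials to absorption coefficients, can be sketched as a simple lookup per octave band. This is a minimal illustration only: the material names, band frequencies, and coefficient values below are placeholders, not taken from the service or from a specific database.

```python
# Octave-band centre frequencies (Hz) commonly used in room acoustics.
BANDS = (125, 250, 500, 1000, 2000, 4000)

# Illustrative absorption coefficients per material and band (placeholder values).
ABSORPTION = {
    "concrete": (0.01, 0.01, 0.02, 0.02, 0.02, 0.03),
    "carpet":   (0.08, 0.24, 0.57, 0.69, 0.71, 0.73),
    "glass":    (0.10, 0.06, 0.04, 0.03, 0.02, 0.02),
}

def annotate(component_materials):
    """Attach an absorption spectrum to each component given its detected material."""
    annotated = {}
    for component, material in component_materials.items():
        coeffs = ABSORPTION.get(material)
        if coeffs is None:
            continue  # unknown material: leave the component unannotated
        annotated[component] = dict(zip(BANDS, coeffs))
    return annotated

result = annotate({"Wall-01": "concrete", "Floor-02": "carpet"})
print(result["Floor-02"][500])  # carpet absorption at 500 Hz → 0.57
```

In the actual service, the resulting per-band spectra would be written back into the IFC model as material properties rather than kept in a Python dictionary.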
- Quality: Images from the OmniCam-360 developed at HHI are used, since it produces parallax-free images (resolution: approx. 10000 × 2000 pixels). Commercially available 360° cameras, such as the GoPro MAX, are considered for comparison. For the time being, training is based on synthetic data.
- Input data format (e.g. dxf/pdf/doc/jpg/xls):
- 2D images: .tiff, .png
- 3D models: .ifc
- Training data available?:
- Output data format:
- .ifc file
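To illustrate how a registered 360° image can be related back to the model geometry, the sketch below converts an equirectangular pixel into a viewing ray in model coordinates, given the camera pose (rotation and translation) that SLAM or RTK GPS would supply. The coordinate conventions and function names are assumptions for this example, not part of the service's actual implementation.

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray in camera coordinates.
    Convention assumed here: u=0 is longitude -180°, v=0 is latitude +90°."""
    lon = 2.0 * math.pi * (u / width - 0.5)
    lat = math.pi * (0.5 - v / height)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def to_model_frame(direction, rotation, translation):
    """Rotate the camera ray into model coordinates. The ray originates at the
    camera position `translation`; both are assumed known from registration."""
    d = tuple(sum(rotation[i][j] * direction[j] for j in range(3))
              for i in range(3))
    return translation, d

# Centre pixel of a 10000 x 2000 panorama looks along the camera's +x axis:
d = pixel_to_direction(5000, 1000, 10000, 2000)
print(d)  # → (1.0, 0.0, 0.0)
```

Intersecting such rays with the IFC geometry is what allows a segmented image region to be assigned to a specific wall, floor, or ceiling element.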
Christoph Ende, Fraunhofer Heinrich-Hertz-Institut (HHI)