Organisation internationale de normalisation



# | Title | Source | Note | Disposition
37576 | Liaison statement | JTC 1/WG 10 | IoT | 3
37581 | JTC 1/WG 10 on Invitation to the 4th JTC 1/WG 10 Meeting | JTC 1/WG 10 | Invitation to the 4th JTC 1/WG 10 Meeting, 18th-22nd Jan. 2016 | 3
37582 | JTC 1/WG 10 on Logistical information for the 4th JTC 1/WG 10 Meeting | JTC 1/WG 10 | | 3
37583 | Liaison statement | ITU-T SG 20 | New Study Group on IoT and its applications including smart cities and communities (SC&C) | 3
37584 | Liaison statement | JTC 1/WG 10 | Collection of data related to the Internet of Things | 3
38041 | Revised WD of ISO/IEC 30141 | JTC 1/WG 10 | Internet of Things Reference Architecture (IoT RA) | 3
38042 | Request for contributions on IoT use cases | JTC 1/WG 10 | | 3

3DG




# | Title | Source
m37828 | Efficient color data compression methods for PCC | Li Cui, Haiyan Xu, Seung-ho Lee, Marius Preda, Christian Tulvan, Euee S. Jang
m38136 | Point Cloud Codec for Tele-immersive Video | Rufael Mekuria (CWI), Kees Blom (CWI), Pablo Cesar (CWI)
m37528 | [FTV AHG] Further results on scene reconstruction with hybrid SPLASH 3D models (improvements on the SPLASH reconstruction from image + depth; the point cloud can be the base representation) | Sergio García Lobo, Pablo Carballeira López, Francisco Morán Burgos
m37934 | Web3D Coding for large objects with attributes and texture (summary: update of the Web3D reference software to support high-resolution textures and better navigation) | Christian Tulvan, Euee S. Jang, Marius Preda

Text copied from the last version of the report in order to guide the work on Wearables.




  • Glasses (see-through and see-closed)

    • Gesture recognition (performed by image analysis)

      • Add new gestures (update the classification schema)

        • MPEG-U could be the starting point

      • Consider an intermediate representation format for arbitrary hand(s) gestures

        • MPEG-7 ShapeDescriptor (Contour and Region) can be the starting point

      • Adaptation of the recognition process with respect to the user

        • MPEG-UD could be the starting point

    • Voice recognition (check with Audio)

      • Recognizing "some" voice commands

        • Pre-encoded keywords: play, pause, left, right, …

        • User-configured set of "words"

      • Able to transmit the captured audio to a processing unit

        • Probably encoded

        • Probably sound features

      • Interpret ambient sounds

    • Image analysis (other than gesture)

      • Recognizing "some" objects (e.g. Traffic signaling)

        • CDVS and probably CDVA

      • Face recognition

        • MPEG-7 (FaceRecognition Desc, AdvancedFaceRecognition Desc and InfraredFaceRecognition Desc)

      • Text recognition

        • Image to string – not much to standardize; however, the input/output API should be defined (see the API sketch after this list)

      • Able to transmit the captured image/video to a processing unit

        • Probably encoded: MPEG-4 video and JPEG

        • Probably image or video features (CDVS and CDVA)

      • Able to convert the captured image in real time into another image (that will eventually be displayed)

        • Input: image, output: image – not much to standardize; however, the input/output API should be defined (covered by the same API sketch after this list)

    • User interface

      • Sensors

        • MPEG-V could be the starting point

          • Gyroscope, accelerometer, camera (color, stereoscopic, infrared, …), microphone, touch-sensitive device, gaze tracker (?!)

      • Display capabilities (single or double screen, stereoscopic)

        • MPEG-V can define actuator capabilities but does not yet deal with displays

      • Rendering

        • Rendering is controlled by the application

        • The glass should expose rendering capabilities (HW acceleration, speed, extensions…)

      • Consider the user profile for adapting glass features

        • MPEG-UD

    • Control and management

      • Define level of quality for individual components in the glass system

        • Example: voice recognition may have level 100 and object recognition only 30; when resources run short, voice recognition takes priority (see the priority sketch after this list)

      • Exposing the hardware capabilities (CPU, GPU, memory, storage, battery level, …); see the capability sketch after this list

      • Define level of quality for individual applications (to be moved outside the glass section)

    • Informative Part

    • Communication and interconnection between glasses and external devices

      • Messages, commands

        • E.g. HTTP, UDP (see the command sketch after this list)

      • Media (image, video, audio)

        • E.g. DASH
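
The voice-command item above only asks for a small, extensible vocabulary: pre-encoded keywords plus user-configured "words". The sketch below is purely illustrative; it assumes the speech has already been transcribed to text and only shows the two-vocabulary lookup. None of the names come from an MPEG specification.

```python
from typing import Optional, Set

# Fixed vocabulary shipped with the device (hypothetical list from the example above).
PRE_ENCODED_KEYWORDS: Set[str] = {"play", "pause", "left", "right"}

# Additional "words" configured by the user at run time.
user_keywords: Set[str] = set()


def add_user_keyword(word: str) -> None:
    """Extend the recognizable vocabulary with a user-chosen word."""
    user_keywords.add(word.lower())


def match_command(utterance: str) -> Optional[str]:
    """Return the first known keyword found in an already-transcribed utterance."""
    for word in utterance.lower().split():
        if word in PRE_ENCODED_KEYWORDS or word in user_keywords:
            return word
    return None


if __name__ == "__main__":
    add_user_keyword("brighter")
    print(match_command("please pause the video"))  # -> pause
    print(match_command("make it brighter"))        # -> brighter
```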
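
The two "input/output API" items (image-to-string text recognition and real-time image-to-image conversion) only call for the interfaces to be defined, not the algorithms. Below is a minimal sketch of what such interfaces could look like; every type and method name is hypothetical and not part of any MPEG standard.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """Placeholder for a captured image; a real API would reference an encoded format."""
    width: int
    height: int
    pixel_format: str  # e.g. "RGB24"
    data: bytes


class TextRecognizer(ABC):
    """Image in, strings out: only this contract would be standardized."""

    @abstractmethod
    def recognize(self, frame: Frame) -> List[str]:
        """Return the text strings detected in the frame."""


class ImageConverter(ABC):
    """Image in, image out: the conversion itself stays implementation-specific."""

    @abstractmethod
    def convert(self, frame: Frame) -> Frame:
        """Return the converted frame that will eventually be displayed."""
```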
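
The per-component quality levels in the control-and-management item reduce to a priority rule when resources run short. The sketch below reuses the numbers from the example (voice recognition 100, object recognition 30); the component names and the CPU-budget model are invented for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Component:
    name: str
    quality_level: int  # higher level = higher priority when resources are scarce
    cpu_request: float  # fraction of one CPU the component asks for


def allocate(components: List[Component], cpu_budget: float = 1.0) -> Dict[str, float]:
    """Grant CPU to components in decreasing quality level until the budget is spent."""
    granted: Dict[str, float] = {}
    for comp in sorted(components, key=lambda c: c.quality_level, reverse=True):
        share = min(comp.cpu_request, cpu_budget)
        granted[comp.name] = share
        cpu_budget -= share
    return granted


if __name__ == "__main__":
    comps = [
        Component("voice_recognition", quality_level=100, cpu_request=0.6),
        Component("object_recognition", quality_level=30, cpu_request=0.6),
    ]
    # Voice recognition gets its full request; object recognition only what is left.
    print(allocate(comps))  # {'voice_recognition': 0.6, 'object_recognition': 0.4}
```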
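
For exposing hardware capabilities, the report lists the kind of information involved (CPU, GPU, memory, storage, battery level) without picking a format. The sketch below simply serializes an invented capability record to JSON; the field names are illustrative, not a proposed schema.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class GlassCapabilities:
    """Invented capability record for a pair of glasses; not an MPEG-defined structure."""
    cpu_cores: int
    gpu_model: str
    memory_mb: int
    storage_mb: int
    battery_percent: int
    displays: int        # single or double screen
    stereoscopic: bool


if __name__ == "__main__":
    caps = GlassCapabilities(
        cpu_cores=4, gpu_model="embedded-gpu", memory_mb=2048,
        storage_mb=16384, battery_percent=72, displays=2, stereoscopic=True,
    )
    # A device could answer a capability query with a JSON document like this.
    print(json.dumps(asdict(caps), indent=2))
```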
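
Finally, the communication item only names candidate transports (HTTP or UDP for messages and commands, DASH for media). The snippet below sends an invented JSON command as a UDP datagram to a local port, purely to illustrate the message/command path; neither the port number nor the payload format comes from the report.

```python
import json
import socket

COMMAND_PORT = 9000  # hypothetical port, not defined anywhere in the report


def send_command(command: str, args: dict, host: str = "127.0.0.1") -> None:
    """Send a one-shot JSON command datagram from the glasses to an external device."""
    payload = json.dumps({"command": command, "args": args}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, COMMAND_PORT))


if __name__ == "__main__":
    # E.g. ask an external renderer to pause media playback.
    send_command("pause", {"stream_id": 1})
```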



Wearables we want to deal with in MPEG



  • Glasses

  • Watches (Miran Choi). AP: to do the same exercise that was done for glasses.

  • Earphone (Miran Choi). AP: to do the same exercise that was done for glasses.

  • Artificial heart (Mihai Mitrea). AP: to do the same exercise that was done for glasses.

  • D-Shirt (Mihai Mitrea). AP: to do the same exercise that was done for glasses.


