AILabs builds intelligence behind Taipei Traffic Density Network

Taipei is a city with millions of cars and motorcycles, and heavy traffic congestion occurs on its streets daily. Incidents have a serious impact on traffic. Taipei has deployed tens of thousands of citywide cameras recording real-time traffic video, and police officers use the footage to relieve congestion. However, the existing approach relies on human effort, and only 16% of incidents are detected manually.

Unlike cities in Mainland China, Taiwan faces two major constraints when building smart city systems. First, humanity with privacy and integrity is the top priority. Taiwan is highly sensitive to human rights, and policy making needs to ensure goodwill and integrity. Protecting privacy and preventing future abuse and misuse are paramount when building such a system, so Taipei uses lower-resolution video from which residents' identities cannot be recognized. Second, the solution needs to be green and environmentally friendly. Forcing the city government to retire its old cameras would be challenging, so the existing cameras are reused; at the same time, Taipei uses low-frame-rate cameras to reduce energy consumption. Under these two constraints, Taiwan AILabs collaborated with the Taipei City Government to build an autonomous traffic detection and prediction system.

Existing Traffic Cameras (CCTV) of Taipei City

The videos taken by the existing traffic cameras are low-resolution and low-frame-rate. We designed a robust network to detect and predict traffic congestion in real time: the Taipei Traffic Density Network (TTDN). The network precisely estimates the vehicle density of a defined region in real time. TTDN is a fully convolutional network: it extracts multi-scale features from traffic video frames and estimates vehicle density through pixel-wise regression.
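As a rough sketch of what such a network can look like (the actual TTDN layer configuration is not spelled out in this post, so the layer sizes, branch count, and dilation rates below are illustrative assumptions), a minimal fully convolutional density regressor in PyTorch might be structured like this:

```python
import torch
import torch.nn as nn

class DensityFCN(nn.Module):
    """Toy fully convolutional density estimator (not the actual TTDN layout).

    Multi-scale context is gathered with parallel dilated convolutions and
    fused into a single-channel density map by a 1x1 pixel-wise regression head.
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Parallel branches with different dilation rates capture multi-scale features.
        self.branches = nn.ModuleList(
            nn.Conv2d(64, 32, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        # 1x1 convolution: pixel-wise regression to one density value per location.
        self.head = nn.Conv2d(32 * 3, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)
        multi = torch.cat([torch.relu(b(feats)) for b in self.branches], dim=1)
        return torch.relu(self.head(multi))  # non-negative density map

model = DensityFCN()
frame = torch.rand(1, 3, 240, 352)   # one low-resolution CCTV frame
density = model(frame)               # shape: (1, 1, 240, 352)
print(density.sum().item())          # summing the map approximates the vehicle count
```

Because the network is fully convolutional, it accepts frames of any resolution, and summing the output map over any region yields an approximate vehicle count for that region.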

 

The Outline of Taipei Traffic Density Network (TTDN)

We would like to remark that our approach is intertwined with the key ingredients of our smart city vision: protection of privacy and conservation of energy. Since the traffic videos are low-resolution, face recognition and license plate recognition are impractical, which protects privacy. Meanwhile, on top of reusing the existing traffic cameras, the low-resolution, low-frame-rate data also reduces storage space and computational cost.

Using the vehicle density map formulation, we can efficiently detect and predict traffic congestion. Besides using the density map directly, existing work shows that the density map is also a strong feature map. Fusing the density map with other features may enable detection of accidents, road construction, and so on. This is an interesting topic we are currently working on.
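To make the downstream use concrete, here is a small illustrative example of consuming a predicted density map; the region-of-interest mask and the 25-vehicle threshold are invented for the sketch and are not the deployed logic:

```python
import numpy as np

def congestion_level(density_map, roi_mask, congested_count=25.0):
    """Integrate a predicted density map over a region of interest (ROI).

    The integral of a density map over a region approximates the number of
    vehicles inside it; comparing that count with a per-camera threshold
    (an arbitrary 25 vehicles here) yields a congestion flag.
    """
    count = float((density_map * roi_mask).sum())
    return count, count >= congested_count

# Example with a dummy density map and a rectangular ROI covering one approach.
density = np.random.rand(240, 352) * 0.02
roi = np.zeros_like(density)
roi[120:240, 100:300] = 1.0
count, congested = congestion_level(density, roi)
print(f"estimated vehicles in ROI: {count:.1f}, congested: {congested}")
```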


The 2018 Smart City Summit & Expo (SCSE)

Featured Photo by highwaysagency / CC BY 2.0


Humanity with Privacy and Integrity is the Taiwan AI Mindset

The 2018 Smart City Summit & Expo (SCSE), along with three sub-expos, took place at the Taipei Nangang Exhibition Center on March 27th with 210 exhibitors from around the world, exhibiting a diversity of innovative applications and solutions for building a smart city. Taiwan is known for its friendly and healthy business environment, ranked 11th by the World Bank. With more than 40 years of ICT manufacturing and top-level embedded systems expertise, companies in Taiwan form a vigorous ecosystem. With an openness toward innovation, 17 of Taiwan's 22 cities have reached the top rankings of the Intelligent Community Forum (ICF).

Ethan Tu, founder of Taiwan AILabs, gave a talk on “AI in Smart Society for City Governance” and laid out Taiwan's AI position: smart cities are for “humanity with privacy and integrity” in addition to “safety and convenience”. He said, “AI in Taiwan is for humanity. Privacy and integrity will also be protected.” The maturity of crowd participation, transparency, and an open-data mindset are the key assets that drive Taiwan's smart cities toward humanity with privacy and integrity. Taiwan AILabs cited the socially participatory, AI-assisted collaboratively edited news site http://news.ptt.cc as an example; thanks to the AI news' robustness and reliability at scale, city governments now consume it to detect social events happening in Taiwan in real time. AILabs collaborated with Tainan City on an AI drone project that emulates “Beyond Beauty” director Chi Po-lin, who died in a helicopter crash. AILabs also built the Taipei Traffic Density Network (TTDN), which supports real-time traffic detection and prediction while keeping citizens' privacy secure: no person or car can be identified for Taipei City without necessity.

The Global Solutions (GS) Taipei Workshop 2018, themed “Shaping the Future of an Inclusive Digital Society”, took place at the Ambassador Hotel in Taipei on March 28, 2018. It was co-organized by the Chung-Hua Institute for Economic Research (CIER) and the Kiel Institute for the World Economy. The “Using Big Data to Support Economic and Societal Development” panel was hosted by Dennis Görlich, Head of the Global Challenges Center at the Kiel Institute for the World Economy. Chien-Chih Liu, founder of the Asia IoT Alliance (AIOTA); Thomas Losse-Müller, Senior Fellow at the Hertie School of Governance; and Reuben Ng, Assistant Professor at the Lee Kuan Yew School of Public Policy, National University of Singapore, participated in the discussion. Big data has been identified as the oil for AI and economic growth. Ethan shared his vision in the panel: “We don't have to sacrifice for safety or convenience. On the other hand, the Facebook movement is a good example that tech giants who overlook privacy and integrity will be dumped.”

Ethan explained three key principles on big data collection drawn from Taiwanese society. These principles emerged from Taiwan's mature open internet communities and movements, and AILabs will promote them as fundamental guidance for data collection on medical records, government records, open communities, and so on.

1. Data produced by users belongs to users. Policy makers shall ensure that no single authority, such as a social media platform, becomes so dominant that it can force users to give up data ownership.

2. Data collected by public agencies belongs to the public. Policy makers shall ensure that data collected by public agencies comes with a roadmap for opening it to the general public for research. g0v.tw, for example, is an NPO behind the open data movement.

3. “Net neutrality” applies not only to ISPs but also to social media and content hosting services. Ptt.cc, for example, persists in giving every voice equal weight without ads. Over time, this equality of voice has overcome fake news by letting evidence stand out.

“Humanity is the direction for AILabs. Privacy and integrity are what we insist on,” said Ethan.

Smart City workshop with the Amsterdam Innovation Exchange Lab from the Netherlands

SITEC from Malaysia visiting AILabs.tw

Learn from Chi Po-lin's view

To love it, one needs to see the beauty of it, as well as its problems, only then can one pray for Taiwan’s future from the heart.

— Chi Po-lin

Chi Po-lin's documentary film “Beyond Beauty: Taiwan from Above” (看見台灣) captures Taiwan entirely in aerial cinematography and broke Taiwan's box office records for the largest opening weekend and the highest total gross for a locally produced documentary. It brings a completely different perspective on the beauty of the land and raises awareness of environmental issues, which later prompted calls on the government to amend laws and revoke Asia Cement's mining license.

 

Beyond Beauty: Taiwan from Above official trailer

 

During the press conference for the sequel to “Beyond Beauty: Taiwan from Above”, Chi was asked why he did not use drones. He pointed out that poor image quality and monotonous camera movement were the reasons he did not consider filming with drones.

On June 10, 2017, Chi died in a helicopter crash in a mountainous area of Hualien County while the crew was shooting footage for the sequel. Since using helicopters for aerial cinematography puts the photographer in tremendous danger, using drones with the aid of artificial intelligence might be worth a try.

We decided to start this project: learning from Chi's view and shooting the documentary with AI.

 

The helicopter crashed in a mountainous area of Hualien County while Chi's group was shooting footage for the sequel

 

Camera angle is one of the most important factors in producing a scenic view. Where the camera is placed in relation to the subject and how the angle is chosen affect the way the viewer perceives the subject and evoke feelings and emotions.

Chi's film uses different camera angles to achieve landscape videography, and the majority of them are bird's-eye views. These constitute 71 percent of the shots, taken of subjects such as coastlines, paddy fields, and cities. This angle gives the audience a wider view and creates a spatial perspective rare for human viewpoints, in which objects and people seem harmless and insignificant.

The rest of the documentary consists of high-angle views and eye-level views. High-angle views are taken when the camera is placed above the subject with the lens pointing down, while eye-level views are taken when the camera looks straight at the subject.

 

Composition of camera angles of “Beyond Beauty: Taiwan from Above”

 

Let's also look at the basic camera moves used in Chi's film. The two main techniques are dollying and orbiting, which constitute 31 and 26 percent of the shots, respectively. In these techniques, the camera on the helicopter flies along an object, often a coastline or a field road, and moves slowly up, down, or side to side. Sometimes the camera orbits around an object such as a lighthouse or a mountain peak.

 

Composition of camera movements of “Beyond Beauty: Taiwan from Above”

 

Other moves, including tracking shots, pans, tilts, and zooms, also appear in the film. We analyze these techniques and their usage to understand how a videographer captures the beauty of our land. They provide our system with the camera strategies and styles needed to create AI-powered observational documentaries.

We have started collaborations with the Tainan City Government, the Department of Aeronautics and Astronautics at National Cheng Kung University, and GEOSAT Aerospace & Technology Inc. to enable AI to shoot a documentary, and learning from “Beyond Beauty: Taiwan from Above” is just the start.

 

Featured Photo by 總統府 / CC BY


Meet JARVIS – The Engine Behind AILabs

At Taiwan AI Labs, we are constantly teaching computers to see, hear, and feel the world so that they can make sense of it and interact with people in exciting new ways. The process requires moving large amounts of data through various training and evaluation stages, each of which consumes substantial compute resources. In other words, the computations we perform are both CPU/GPU bound and I/O bound.

This imposes a tremendous challenge in engineering such a computing environment, as conventional systems are either CPU bound or I/O bound, but rarely both.

We recognized this need and crafted our own computing environment from day one. We call it Jarvis internally, named after the system that runs everything for Iron Man. It primarily comprises a frontdoor endpoint that accepts media and control streams from the outside world, a cluster master that manages bare metal resources within the cluster, a set of streaming and routing endpoints that are capable of muxing and demuxing media streams for each computing stage, and a storage system to store and feed data to cluster members.

The core system is written in C++ with a Python adapter layer to integrate with various machine learning libraries.

 

 

The design of Jarvis emphasizes real-time processing capability. The core of Jarvis lets data streams flow between computing processors with minimal latency, and each processing stage is engineered to achieve a required throughput per second. A long, complex procedure is broken down into smaller sub-tasks, and Jarvis forms them into a computing pipeline that achieves the target throughput. We also use muxing and demuxing techniques to process portions of the data stream in parallel, further increasing throughput without incurring too much latency. Once the computational tasks are defined, the blueprint is handed over to the cluster master, which allocates the underlying hardware resources and dispatches tasks to run on them. The allocation algorithm has to take special care with GPUs, as they are scarce resources that cannot be virtualized at the moment.
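Jarvis itself is an internal system, but the demux/process/mux pattern described above can be sketched in a few lines of Python; the queue-based design and function name below are our illustration of the concept, not Jarvis's actual API:

```python
import queue
import threading

def run_pipeline(frames, stage_fn, workers=4):
    """Sketch of one pipeline stage: demux frames to parallel workers,
    process them with `stage_fn`, then mux results back into stream order."""
    frames = list(frames)
    in_q, out_q = queue.Queue(), queue.Queue()

    def worker():
        while True:
            item = in_q.get()
            if item is None:              # sentinel: the stage is drained
                break
            idx, frame = item
            out_q.put((idx, stage_fn(frame)))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()

    for item in enumerate(frames):        # demux: fan frames out to workers
        in_q.put(item)
    for _ in threads:                     # one sentinel per worker
        in_q.put(None)
    for t in threads:
        t.join()

    results = [out_q.get() for _ in frames]
    return [r for _, r in sorted(results)]  # mux: restore original stream order

# Toy usage: an "inference" stage applied to dummy frames.
print(run_pipeline(range(8), stage_fn=lambda f: f * f))
```

A full Jarvis pipeline chains several such stages, and the cluster master decides which machines and GPUs each stage's workers run on.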

Altogether, Jarvis is a powerful yet agile platform for machine learning tasks. It handles a huge amount of work with minimal overhead. Moreover, Jarvis can be scaled horizontally with little effort by simply adding new machines to the cluster. It suits our needs well. We have re-engineered Jarvis several times in the past few months and will continue to evolve it. Jarvis is our engine for moving fast in this fast-changing AI field.

 

Featured image by Nathan Rupert / CC BY


AI front desk – improving office security and working conditions

Imagine someone in your office who serves as doorkeeper, takes care of visitors, and even watches over your working conditions, 24-7. One of our missions at Ailabs.tw is to explore AI solutions that address society's problems and improve people's quality of life, and we have developed an AI-powered front desk to do all of the tasks mentioned above.

According to the 2016 annual report from Taiwan's Ministry of Labor (MOL), the average Taiwanese employee works 2,106 hours per year. Compared with OECD statistics, this number ranks third in the world, just below Mexico and Costa Rica.

On December 4, 2017, the first review of the Labor Standards Act revision was passed. The new version of the law will allow flexible work-time arrangements and expand the monthly maximum work hours to 300. Other major changes in the amendment include conditionally allowing employees to work 12 days in a row and reducing the minimum break between shifts from 11 hours to 8 hours. The ruling party plans to finish the second and third readings of this revision early next year (2018), which would put 9 million Taiwanese workers in a worse working environment. To shed the bad reputation of “Taiwan – the island of overwork”, we need a system that notifies both employee and employer when someone has been severely overworking, and whose attendance records cannot easily be manipulated.

In May 2017, Luo Yufen, an employee of Pxmart, one of Taiwan's major supermarket chains, died from long-term overwork after seven days in a coma. However, the OSHA (Occupational Safety and Health Administration) initially found no evidence of overwork after reviewing the clock-in report provided by Pxmart, which looked ‘normal’. It wasn't until August, when Luo's case was sent back for further investigation, that her real working hours before her death proved the overwork.


The Road to Understanding Aerial Cinematography

How does AI pick out the best views and shoot an aerial documentary? At Ailabs, we've built a system, equipped with a drone and a 360 camera, that has the eye of a videographer.

We want filmmaking to achieve camera movements and tracking shots through artificial intelligence itself. We designed our system to pick interesting objects and features, such as a lighthouse or a coastline, from a 360-degree video and create a flat, standard documentary without manual control of camera movement and angle.

 

The drone carries a 360-degree 4K camera and flies over Tainan City, Taiwan, to collect videos that are later post-edited by our system.

 

This project is inspired by Chi Po-lin's documentary film “Beyond Beauty: Taiwan from Above” (看見台灣), which captures Taiwan entirely in aerial cinematography and broke Taiwan's box office records for the largest opening weekend and the highest total gross for a locally produced documentary. Unfortunately, Chi died in a helicopter crash in a mountainous area of Hualien County while the crew was shooting footage for the sequel.

Chi had pointed out that poor image quality and monotonous camera movement were the reasons he did not consider filming with drones. At the same time, flying in helicopters for aerial cinematography puts the photographer in tremendous danger. For this reason, we started the “Chi Po-lin project” using a drone, a 360 camera, and our AI-powered post-editing software.

Automatic Cinematography for 360

When using a helicopter for videography, the pilot flies the route and the videographer operates the camera. In the case of a drone, the videographer is replaced by a 360 camera and a post-editing algorithm that determines where to focus in the 360 images.

 

 

As in the scenario above, we installed a 360 camera on the drone to let the AI control the perspective in the 360 image and render a portion of the 360 image to create virtual camera movements such as pan, tilt, and zoom.

 

Algorithmically controlled perspectives from the 360 images 

 

One reason aerial drone videos feel monotonous is that it is very difficult for a person to operate the drone's controls and consider composition details at the same time. Moreover, conventional methods of automatic cinematography simplify the problem to aligning the camera with the center of the object of interest. So we hand control of the camera to the AI, which, unlike existing methods, takes into account the composition and the semantic flow of the scene.

360 cameras record every point of view. After the videos are collected, the AI recognizes the scenes and objects of interest encountered during the flight, selects the best angles for each moment, and automatically plans multiple sets of suitable trajectories to assist the user in editing the movie.
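The virtual pan, tilt, and zoom mentioned above amount to the standard equirectangular-to-perspective projection. The sketch below (the function name, parameters, and default values are ours, not the production pipeline's) shows how a flat view could be rendered from a 360 frame with NumPy and OpenCV:

```python
import cv2
import numpy as np

def render_view(equi, yaw_deg, pitch_deg, fov_deg=90.0, out_w=1280, out_h=720):
    """Render a flat perspective view from an equirectangular 360 frame.
    Yaw pans the virtual camera, pitch tilts it, and a smaller FOV zooms in."""
    h, w = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)  # focal length in pixels

    # A ray through every output pixel, in camera coordinates
    # (x right, y down, z forward), normalized to unit length.
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays: pitch around the x axis, then yaw around the y axis.
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(pitch), -np.sin(pitch)],
                      [0, np.sin(pitch), np.cos(pitch)]])
    rot_y = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = rays @ rot_x.T @ rot_y.T

    # Convert rays to longitude/latitude, then to equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    map_x = ((lon / np.pi + 1.0) * 0.5 * w).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * h).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)

# Example (assuming a frame extracted from the 360 video):
# frame = cv2.imread("equirect_frame.jpg")
# view = render_view(frame, yaw_deg=30.0, pitch_deg=-10.0, fov_deg=75.0)
```

Sweeping the yaw over time pans the virtual camera, changing the pitch tilts it, and narrowing the field of view zooms in, all in post-production from a single 360 recording.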

Automatic Color Enhancement

To enhance image quality, we start with color enhancement. We want our video to look as appealing and as clear as footage shot by a professional film producer. We leveraged a model that learns color enhancement from the original input photos and high-quality HDR images; the model is trained in an unsupervised way with a GAN. To reduce model complexity and speed up training, the model is trained on low-resolution images and therefore can only output low-resolution images. We extend it to support high-resolution images with a patch-based method: we divide the high-resolution image into several overlapping patches and use alpha blending to stitch them back together. Although it is an image enhancement model, it is stable enough that we can apply it directly to video frames without temporal artifacts. The result looks more appealing than the original video in terms of color, and thanks to detail enhancement it also shows more detail than the original; for example, the sky looks clearer than in the original input video.
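The overlapping-patch stitching can be illustrated with a short sketch; the patch size, overlap, and linear alpha ramp below are assumptions chosen for the example, and enhance_patch stands in for the learned enhancement model:

```python
import numpy as np

def enhance_highres(image, enhance_patch, patch=512, overlap=64):
    """Enhance a high-resolution image with a model that only handles small
    inputs, by processing overlapping patches and alpha-blending the seams.

    Assumes the image is at least `patch` pixels on each side; `enhance_patch`
    is a stand-in for the learned model's forward pass.
    """
    h, w, c = image.shape
    out = np.zeros((h, w, c), dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)

    # Per-patch alpha mask: full weight in the center, ramping down toward the
    # edges so overlapping patches blend smoothly instead of showing seams.
    alpha_1d = np.ones(patch)
    alpha_1d[:overlap] = np.linspace(0.05, 1.0, overlap)
    alpha_1d[-overlap:] = np.linspace(1.0, 0.05, overlap)
    alpha = np.outer(alpha_1d, alpha_1d)[..., None]

    step = patch - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0, x0 = min(y, h - patch), min(x, w - patch)
            tile = image[y0:y0 + patch, x0:x0 + patch].astype(np.float64)
            out[y0:y0 + patch, x0:x0 + patch] += enhance_patch(tile) * alpha
            weight[y0:y0 + patch, x0:x0 + patch] += alpha
    return out / weight  # every pixel is covered by at least one patch

# Toy usage: an identity "model" applied to a dummy full-HD frame.
frame = np.random.randint(0, 256, (1080, 1920, 3)).astype(np.float64)
blended = enhance_highres(frame, enhance_patch=lambda p: p)
```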

Comparison video

Final result with automatic color enhancement

Partners:

– AILabs (台灣人工智慧實驗室)
– Southern Taiwan Science Park Bureau (科技部南科管理局)
– Tainan City Government (台南市政府)
– Department of Aeronautics and Astronautics, NCKU (成大航太)
– GEOSAT Aerospace & Technology Inc (經緯航太)

Sponsored by:

– Microsoft
– Nvidia
– Garmin

 

Featured image by YELLOW Mao. 黃毛, Photographer / CC BY