Excerpt
A joint MIT-IBM team set out to build a very large-scale dataset to help AI systems recognize and understand actions in videos. The dataset contains one million three-second video clips, each annotated with the actions that occur during the clip.