Availability of live data stream of tracked and categorized objects #209
Replies: 9 comments
-
Hi James, if you haven't done so already, have a look at the ITALK project, where you should find plenty of links.
-
Hi @JamesKitching, welcome to Robotology, and thanks again for having advertised us at the Google Campus; let's see how it goes. As @frank-foerster has pointed out, over the past years our community has already contributed a number of projects and activities relevant to the problem you're interested in. In particular, regarding your specific request for accessible live data streams, let me reference some material we produced within the WYSIWYD EU project and hosted publicly at Figshare. The datasets we acquired allowed us to experiment with different action recognition algorithms (@dcam0050 may want to chime in).

In more detail, the WYSIWYD scenario involved the iCub looking at a table with some objects lying on top (a car toy and an octopus toy), while the operator pointed at, lifted/dropped, and pushed/pulled those objects. The operator's skeleton was acquired with a Kinect, along with the 3D locations and the categories of the objects processed using the robot's cameras. I don't know if you'll consider these experiments useful and/or relevant to your research; however, to get a quick impression of what it was all about, just click on the image below to watch a short piece of footage. The data streams were recorded live during the experiment and can be played back in real time using our tool.

Of course, the same data could be provided by means of a more general-purpose interface, as you mentioned, and Yarp is up to the task for this kind of development since it enables easy intercommunication with existing packages. That being said, we would need time, resources, and opportunities to get to that goal: it is, therefore, a matter of finding the right contributor (it could even be you 😉).
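To make the idea of such a data stream concrete, here is a minimal sketch of parsing one object-tracking record. The record layout (a label followed by x/y/z coordinates) and all names are assumptions for illustration only; the actual WYSIWYD logs define their own formats.

```python
# Toy sketch: parse one hypothetical "label x y z" record, of the kind a
# live stream of categorised, tracked objects might carry. The field
# layout is an illustrative assumption, not the real WYSIWYD format.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str   # object category, e.g. "car" or "octopus"
    x: float     # 3D position, e.g. in the robot's reference frame
    y: float
    z: float

def parse_record(line: str) -> TrackedObject:
    """Parse a whitespace-separated 'label x y z' record."""
    label, x, y, z = line.split()
    return TrackedObject(label, float(x), float(y), float(z))

obj = parse_record("car 0.12 -0.05 0.30")
print(obj.label, obj.z)
```

In a real setup the record would arrive over a Yarp port rather than as a string literal; the parsing step would be the same.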
-
Hi Ugo (@pattacini) and @frank-foerster, thanks both for your replies. Might you, or someone else, be interested in discussing publishing ideas on linguistics, concept modelling, virtual 3D simulation, discrete event simulation, universal language acquisition, universal grammar, and artificial consciousness? I am out this evening, so I will not be posting much later. Thanks, James
-
Those ITALK resources have been migrated; try this: http://wiki.icub.org/italk. You can find even more with a quick search.
-
Hi @pattacini, thanks for the reply. The repo download instructions on this page still refer to http://eris.liralab.it/wiki/Getting_the_italk_software_svn, which is broken. I did manage to find the migrated link here: http://wiki.icub.org/wiki/Getting_the_italk_software_svn. However, when I tried the checkout command those instructions give, I got an error:

`svn: E170013: Unable to connect to a repository at URL 'https://svn.cognitivehumanoids.eu/italk/trunk/italk'`

I was also not able to browse this repo. I did find that https://cognitivehumanoids.eu/ exists, and it referred me to https://talk.iit.it/en/, but there was no mention of a repository. I am still drafting my paper; I will share a draft on my blog (http://hemseye.org) and update you here when I am ready. Thanks
-
This is my draft abstract for my paper:

This paper describes how semantic primes can be used as an as-yet-unexplored route to modelling and implementing conceptual language understanding in an artificial intelligence. Semantic primes are a set of sixty-five language-universal basic conceptual words which cannot be dictionary-defined using simpler words. Semantic primes are universal in that they appear in every language. A vocabulary of just semantic primes has been shown to be sufficient to reword any sentence without loss of meaning, and has also been shown to be capable of creating a dictionary definition of any other, more complex word. Each semantic prime concept has a limited set of dependencies, requirements, possible actions, and ways in which it can be expressed and used with other concepts. Each semantic prime therefore implies and carries with it its own set of grammar rules. These language-universal conceptual grammar rules have been named "Semantic Meta-Language" and have been described in a two-volume report edited by Wierzbicka and Goddard but produced by a number of other linguists studying multiple languages. Semantic Meta-Language is a universal implied grammar for the universal semantic prime vocabulary.
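As a toy illustration of the "vocabulary of just semantic primes" idea, here is a minimal sketch that checks whether a sentence uses only words from a prime vocabulary. The word set below is a small illustrative subset I chose for the example, not the full sixty-five-prime inventory.

```python
# Toy illustration: does a sentence stay within a prime-only vocabulary?
# SEMANTIC_PRIMES here is a small illustrative subset, not the full
# sixty-five-prime inventory described in the abstract.
SEMANTIC_PRIMES = {"i", "you", "want", "do", "this", "good", "not", "know"}

def uses_only_primes(sentence: str) -> bool:
    """True if every word in the sentence is a semantic prime."""
    words = sentence.lower().rstrip(".!?").split()
    return all(w in SEMANTIC_PRIMES for w in words)

print(uses_only_primes("I want this"))         # True
print(uses_only_primes("I want this gadget"))  # False
```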
-
In my paper I go on to make some further links to earlier work, before describing how a virtual model of each semantic prime, expressed in BPMN (Business Process Model and Notation), could be used to recognise the grounded context and use of each semantic model in a live, linked robotic virtual world. In effect, these semantic prime BPMN models become like Skinner boxes (see https://en.wikipedia.org/wiki/Operant_conditioning_chamber) that a consciousness can recognise. A consciousness can choose to manipulate, respond to, or concatenate a BPMN "Skinner box" with other "Skinner box"-like BPMN models of semantic primes before actioning them. -- I still need to work on my explanation of the justification for a number of additional ideas.
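The concatenation idea above could be sketched very roughly as chaining process models end to start. Everything below is hypothetical stand-in code of my own devising; a real implementation would operate on actual BPMN documents, not on simple step lists.

```python
# Rough sketch: a semantic-prime process model as an ordered list of
# steps, and concatenation as chaining two models end to start. This is
# a stand-in for real BPMN; all names and steps are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PrimeModel:
    prime: str                        # the semantic prime this models
    steps: list = field(default_factory=list)  # ordered process steps

def concatenate(a: PrimeModel, b: PrimeModel) -> PrimeModel:
    """Chain two prime models into one composite process."""
    return PrimeModel(prime=f"{a.prime}+{b.prime}", steps=a.steps + b.steps)

want = PrimeModel("WANT", ["detect goal object"])
do = PrimeModel("DO", ["plan reach", "execute reach"])
composite = concatenate(want, do)
print(composite.prime, composite.steps)
```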
-
Hi @JamesKitching, please refer to https://svn.rbcs.iit.it/italk/trunk/italk instead to get the ITALK software. The migration happened several times in the past, from servers we own to servers maintained by other groups: sorry for the jumble. I've updated the wiki pages linked above accordingly. I've also seen you're sketching out your approach to semantic primes. It seems promising, but it's not really my field of research; that's why I suggested you post your ideas here, in the hope others will show up. You could always contact people directly, for example those who participated in ITALK.
-
Hi @pattacini, thanks for the update. I think I will try approaching some grant awarders with what I have, and also approach some experts at the same time. I mentioned BPMN, which I think will form part of the solution; it is a technology I first met while working for a major discrete-event modelling software supplier. I think my background is particularly well suited to developing these ideas further, which is what I want to do. There is a risk that, in advertising the ideas too widely and taking too long to get funding, I might find my interests have started to become more commonplace. However, if anyone is interested in my interests and how they have developed, you could check out http://www.hemseye.org. I am pretty confident that my suggested initial approach, as outlined above, is new. I would be very interested to hear from anyone willing and able to help me with advice on approaching grant awarders and on developing my ideas further. Thanks
-
I'm new to Robotology. Thanks, Ugo, for signing me up. @pattacini, I discussed with you by email earlier how I would be trying to promote open-source interest in iCub.

Last week I attended a Google UK meeting, at the Google London Campus, of community leaders of local meetup groups with an interest in machine learning. The Google day went pretty well; the 40 group leaders represent about 5000 UK community group members, each with an interest in machine learning. As discussed with you, I did raise the idea of signing up to work on Robotology at this meeting. I have not yet had anyone express explicit interest in joining the iCub project, and I have not yet received the contact-list follow-up from this meeting, so hopefully we will get some interest once I can pass the word on more generally.

I did get quite a lot of interest in my ideas about modelling a small set of basic universal word concepts for artificially conscious contextual understanding and recognition. I was very pleased to hear from one computational linguistics academic whose reaction was "Yes, I can see that could work; I wonder what Chomsky would say...". I want to do some more work on this project before I look at how to plug it into iCub. I also think I should probably finish running all the iCub tutorials before I ask too many questions that might be answered by doing so.
If anyone does have the time, I would like to ask whether there is currently access to a live data stream containing the identified location and the categorised, positionally tracked objects, relative to the AI's position within that location. I appreciate that this is an ongoing area of research; however, even a stated intent to provide such an interface at some point in the future would be useful. There has previously been work done in this area with RoboEarth, KnowRob, etc. Has there been any iCub interest in engaging with these projects? If not, why not?
Thanks very much - I will be in contact soonish hopefully.