@@ -8,7 +8,9 @@ of a wide range of emotional expressions and facial gestures.

The scripts are written in OpenCog "atomese", with the intent that this
enables integration with high-level cognitive, emotional and natural
- language-processing functions.
+ language-processing functions. The scripts are in active development;
+ new designs and design proposals are actively debated on the mailing
+ list.
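
For readers unfamiliar with atomese, the sketch below gives its general
flavor: behaviors are ordinary atoms stored in the AtomSpace, so other
cognitive subsystems can inspect and rewrite them. The names below are
illustrative only; they are not the actual predicates used in this repo,
and the python callout is assumed rather than real.

```scheme
(use-modules (opencog) (opencog exec))

; Hypothetical behavior fragment: a named predicate that, when
; evaluated, calls out to an assumed python animation driver
; ("py: do_emotion" is a stand-in, not this repo's API) to play
; a brief smile.
(DefineLink
   (DefinedPredicate "smile-briefly")
   (Evaluation
      (GroundedPredicate "py: do_emotion")
      (List (Concept "smile") (Number 1.5))))

; Evaluating the predicate runs the behavior.
(cog-evaluate! (DefinedPredicate "smile-briefly"))
```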

The robot emulator is a Blender animation rig. It implements a dozen
facial expressions, another dozen gestures, such as blinking and
@@ -42,18 +44,23 @@ At this time, the code here integrates three subsystems:
visible in the room. These faces are localized in 3D space, and
issued a numeric ID.

+ (This needs to be replaced by a (much) better visual system.)
+
* A collection of "behavior tree" scripts that react to people entering
and leaving the room. The scripts attempt to interact with the
people who are visible, by displaying assorted facial expressions.

+ (This needs to be replaced by a library of selections, as described
+ in [README-affects.md](README-affects.md).)
+
* A representation model of the robot self and its surroundings (namely,
the human faces visible in the room). The goal of this model is
two-fold (see the sketch after this list):

- ** Allow the robot to be self-aware, and engage in natural language
+ ** Allow the robot to be self-aware, and engage in natural language
dialog about what it is doing.

- ** Enable an "action orchestrater" to manage behaviors coming from
+ ** Enable an "action orchestrator" to manage behaviors coming from
multiple sources.
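
As a concrete illustration of the representation model, here is one
plausible way a visible face could be recorded in the AtomSpace. The
predicate names and coordinates are hypothetical; they are not the
atoms this code actually writes.

```scheme
(use-modules (opencog))

; Hypothetical sketch: face #3 is visible, and is localized in 3D.
(Evaluation
   (Predicate "visible-face")
   (Number 3))

(Evaluation
   (Predicate "face-location")
   (List
      (Number 3)                                       ; the numeric face ID
      (List (Number 0.8) (Number -0.2) (Number 1.6)))) ; x, y, z position

; Behaviors and natural-language code can then query these atoms,
; giving both subsystems a shared model to talk about.
```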

Some things it currently doesn't do, but should:
@@ -66,20 +73,33 @@ Some things it currently doesn't do, but should:
alpha stages.

* Integrate superior face-tracking and face recognition tools.
- Right now, the face tracker eats too much CPU, and is completely
- unable to recognize known faces.
+ Right now, the face tracker is completely unable to recognize known
+ faces.

- * Have a GUI tools for editing behavior trees. The XXX tool has been
- suggested as such a tool.
+ * Have a GUI tool for editing behavior trees. This could be
+ accomplished by using the
+ [behavior3js](http://behavior3js.guineashots.com/) tool.

- * Integration with OpenPsi behavior system.
+ * Integration with the OpenPsi behavior system. However, see also the
+ [affects proposal](README-affects.md), which may be even more
+ important.

- * Enable a memory, via the OpenCog AtomSpace database. The goal here
+ * Enable memory, via the OpenCog AtomSpace database. The goal here
is to remember people and conversations and feelings, between
- power-offs and restarts.
+ power-offs and restarts. This requires changes to this repo,
+ and also writing tools and utilities to simplify the SQL and/or
+ file-dump management.

* Additional sensory systems and sensory inputs. A perception
- synthesizer to coordinate all sensory input.
+ synthesizer to coordinate all sensory input. High priority:
+
+ ++ Audio power envelope, fundamental frequency (of voice),
+ rising/falling tone. Background audio power. Length of silent
+ pauses. Detection of applause, laughter, loud background
+ speech, loud bangs.
+
+ ++ Video-chaos: is there lots of random motion in the visual field,
+ or are things visually settled?

* Have a much more sophisticated model of the world around it,
including the humans in it. It should also have better model