BLOG
About rules without rules. What that means.
Digression regarding games that can be built inside simple intelligent systems.
Continuing the discussion in terms of the PyLogTalk model. [Best to read that first.]
More about the model. [Best to read that next.]
Word count: 2.200 ~ 9 PAGES | Revised: 2018.10.16
REASONING CAPACITY
We discuss testing and default or vacuum datasets, in the context of the actors/agents described above.
Summarizing. Reviewing. Some theory.
We'll be able to define pointing from a set of methods to everything that can be built from some n-fold combination of those methods. In practice we restrict that to bounds of time. As in: whatever can be constructed on the current system before count N in message M reaches X.
Else before message M reaches actor A, gets read, the input is copied, and the message's clock is halted. Basically we spawn a bunch of actors and randomly send out messages; any actor that receives a message not intended for it, or that lacks a method to respond, just forwards it randomly within the group.
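A minimal sketch of that bounded random forwarding, in Python. The names here (Actor, route, bound) are illustrative only, not part of any fixed API: an actor that lacks a method for the message forwards it to a random peer, and everything halts once the message's count reaches the bound X.

```python
import random

class Actor:
    def __init__(self, name, methods):
        self.name = name
        self.methods = methods  # {message kind: callable}

def route(msg, group, bound):
    """Deliver msg by random forwarding until some actor can handle it or the count hits bound."""
    current = random.choice(group)
    while msg["count"] < bound:
        handler = current.methods.get(msg["kind"])
        if handler is not None:
            return handler(msg["payload"])   # a method responds: the clock halts here
        msg["count"] += 1                    # no suitable method: forward randomly within the group
        current = random.choice(group)
    return None                              # count reached X before any actor could respond

# toy group: only one actor knows how to double numbers
group = [Actor("A", {}), Actor("B", {"double": lambda x: 2 * x}), Actor("C", {})]
print(route({"kind": "double", "payload": 21, "count": 0}, group, bound=50))  # usually 42
```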
This decomposes into several types of groups. Taken together in some sequence, transitions from behavior classified in one group to another group, in any order, construct the nexus of the results. Some behaviors are significantly more likely to be present in each nexus.
Meanwhile pointing can also be to any one of these groups.
If there are V-many ingredients and W-fold combinations are allowed [moreover only those completed within the bounds of the count], then, for example, we have a group of all functions consisting of composed methods that result from combining the same, fixed W-many methods selected from the whole set of V-many, varying only their parameters continuously. [Remember actors/agents have a parameters field.]
We define continuous for all practical purposes [FAPP] as in the smallest valid increments. Whatever those are. Eventually we get a case where the parameter is not valid and disables the method. Or nil. Else not provided. The method drops out.
But that can't be, as it leaves, contrary to assumption, W-1-many methods being combined. Here we have a continuous transition to another group, with all the same methods, again W-many, except that the dropped method is replaced by another, with its smallest valid parameters defined. The group, as usual, consists of all functions possible by combining those methods.
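Here is one way to picture such a group in code. Everything below is a toy assumption: three made-up methods standing in for the V ingredients, small parameter lists standing in for "continuous" FAPP increments, and W = 2. The group fixes the same two methods and varies only their parameters; dropping one method and substituting another, with its smallest valid parameters, would be the transition to the neighbouring group.

```python
import itertools

V = {                      # the whole ingredient set: name -> (method, valid parameter values)
    "scale": (lambda x, p: x * p, [1, 2, 3]),
    "shift": (lambda x, p: x + p, [0, 5, 10]),
    "clip":  (lambda x, p: min(x, p), [50, 100]),
}

def group(method_names):
    """All functions obtained by composing exactly these W methods, only parameters varying."""
    fns = [V[m] for m in method_names]
    for params in itertools.product(*[valid for _, valid in fns]):
        def composed(x, fns=fns, params=params):
            for (f, _), p in zip(fns, params):
                x = f(x, p)
            return x
        yield method_names, params, composed

# The group fixed on ("scale", "shift"); if "shift"'s parameter became invalid and dropped out,
# we would move to the neighbouring group ("scale", "clip") with clip's smallest valid parameter.
for names, params, fn in group(("scale", "shift")):
    print(names, params, fn(7))
```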
We cycle in a loop through one of many possible sequences.
They are equivalent if we take the union of results.
[And the dual group of each, for example, would define, given some results of the same type, different types resulting from different parameters continuously varied, and for all possible W, would find the sets --- of valid inputs-at-time-t+1-as-outputs-at-time-t --- requisite for gluing [combining] together --- by sets of valid outputs of one actor considered as input for all tuples --- that construct those results. Except that's a computationally complex problem, and we'll try avoiding that as much as possible.]
Each nexus gives us a default data set --- besides the data set of message trails from use by users --- for machine learning.
Preferences give more weight to certain paths as we decide. Or network system users decide.
From any type of method we have a path, consisting of some combination of them, leading to a valid input to any other type of method. This gives two data sets. A default data set assuming all cases occur. Then a user-created data set of cases that actually occur. Each path or mapping has a weight, and this is a frequency. A probability. So we get vectors of probabilities and reason from there.
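A small sketch of the two data sets and the resulting probability vectors, with invented method names: the default set pretends every path occurs once, the user set counts the paths actually observed in message trails.

```python
from collections import Counter

methods = ["parse", "score", "render"]

# default data set: every ordered pair of methods is a possible path, weight 1
default_counts = Counter((a, b) for a in methods for b in methods if a != b)

# user data set: paths actually taken, read off the message trails
observed_trails = [("parse", "score"), ("parse", "score"), ("score", "render")]
user_counts = Counter(observed_trails)

def probabilities(counts):
    total = sum(counts.values())
    return {path: n / total for path, n in counts.items()}

print(probabilities(default_counts))   # uniform prior over all paths
print(probabilities(user_counts))      # observed frequencies we actually reason from
```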
In the current framework, as I defined it, this will just involve setting up a few loops, and that's it; actually pretty easy to get going.
MORE CONCRETELY
Messages are basically actors that are containers for arbitrary programs the system can run.
We simply encapsulate programs very aggressively. In this case anything you can write in the underlying language, you should be able to place inside a message.
It just doesn't run until several conditions are met. Nor can it be accessed directly: no calling or passing.
For example: one message is a ML script of the most ubiquitous kind.
Another message is data. Or else a reference to location of data.
Typically small either in absolute size of the code, or how long it takes to run, on average, compared to the whole system.
The actor "where" the message arrives is what runs the contents of the message. Parameters can contain a destination. Or several. In which case, if a message with the ML script arrives at D and meant for G, then message, with the clock still running, is forwarded to G by D.
If, for example, G has both the data and the script, it can try two things. Running the data on the script, which gives no output, or running the script on the data. Meanwhile it erases the script and data from its logical memory.
To do the operation again, it would need to "receive" those same two messages again.
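A sketch of that script/data pairing, under the simplifying assumptions that forwarding is a direct call and that "logical memory" is a small dict; the Agent class and its field names are illustrative only.

```python
class Agent:
    def __init__(self, name, network):
        self.name = name
        self.network = network   # name -> Agent
        self.held = {}           # logical memory: may hold a "script" and/or "data"

    def receive(self, msg):
        if msg["dest"] != self.name:                        # meant for G, arrived at D:
            return self.network[msg["dest"]].receive(msg)   # forward, clock still running
        self.held[msg["kind"]] = msg["body"]
        if "script" in self.held and "data" in self.held:   # both pieces present: run
            result = self.held["script"](self.held["data"])
            self.held.clear()                               # erase script and data afterwards
            return result                                   # to run again, both must be re-sent

network = {}
network["D"] = Agent("D", network)
network["G"] = Agent("G", network)
network["D"].receive({"dest": "G", "kind": "script", "body": lambda xs: sum(xs)})
print(network["D"].receive({"dest": "G", "kind": "data", "body": [1, 2, 3]}))   # 6
```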
Messages, besides a loader, may be encrypted, and most likely unique state invariants about the correct destination allow them to be decrypted and run. By that destination.
Let's try the following. One message is just an image, png. 128 x 128 say. Another message is a script that counts pixels: 1 is added to the count if a < R < b, c < G < d, e < B < f, else 0.
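For instance, the counting script might look like the following, where the thresholds a..f are placeholders and the 128 x 128 image is faked as a flat list of RGB tuples so the example stays self-contained (the real image would arrive in the other message).

```python
import random

a, b = 50, 200     # red band
c, d = 50, 200     # green band
e, f = 50, 200     # blue band

def count_pixels(pixels):
    """Count pixels whose R, G, B channels all fall strictly inside the bands."""
    return sum(1 for (R, G, B) in pixels if a < R < b and c < G < d and e < B < f)

# stand-in for the 128 x 128 png delivered by the other message
image = [(random.randrange(256), random.randrange(256), random.randrange(256))
         for _ in range(128 * 128)]
print(count_pixels(image))
```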
Besides the easily handled asynchrony, concurrency, ability to prove properties, and features like locked messages, this allows us to delay choosing how to process data. If, for example, method M fails to give any useful output, actor A can send messages out to other actors, requesting copies of some of their methods, and then try those until something works or the attempt times out. This opens the door to f( g( somedata_DATA ), h( i( k( moredata_DATA ) ) ) )-type learning where, in a pragmatic manner, decisions can be made out of order. To make better decisions. The system may decide upon f rather than something else before deciding upon, say, g, even though g is run earlier. Indeed we can have more diagrammatic composition, rather than linear. Several things tried, then other things tried. A preferred output, if any, or more than one, finally selected and sent off elsewhere.
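A sketch of that try-several-things-then-select behavior, with made-up names: the actor's own method is tried first, borrowed methods next, failures are skipped, and a preferred output is chosen at the end by some scoring function.

```python
def try_candidates(data, own_method, borrowed_methods, score, budget=10):
    """Try own method, then borrowed ones, within a budget; return the preferred output, if any."""
    outputs = []
    for method in [own_method] + borrowed_methods[:budget]:
        try:
            result = method(data)
        except Exception:
            continue                  # a method that fails outright is just skipped
        if result is not None:
            outputs.append(result)
    return max(outputs, key=score) if outputs else None

# toy usage: the actor's own method can't handle this input, a borrowed one can
own = lambda s: None
borrowed = [lambda s: len(s), lambda s: s.count("a")]
print(try_candidates("banana", own, borrowed, score=lambda r: r))   # 6
```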
Neural-net-type learning, so conceived, is possible in coalitional nets at a coarser granularity. I mean: agents can learn which neighboring agents more often than not provide useful methods when messaged, and which are worth calling with priority. That can be stored by the agents getting data and sending out requests for methods, when their own are not always successful. Meanwhile outputs are evaluated, for example, by sending them to a next set of nets of agents, which later return messages about useful or not useful, based, again for example, on the difficulty of processing those outputs further, or else on usefulness to some end user who interacts with them.
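A sketch of that coarser-grained learning, with illustrative names and an assumed simple update rule (not taken from the original model): each agent keeps a running usefulness weight per neighbour, nudges it when evaluations come back, and asks the highest-weighted neighbours first the next time its own methods fall short.

```python
from collections import defaultdict

class LearningAgent:
    def __init__(self, neighbours):
        self.neighbours = neighbours
        self.usefulness = defaultdict(float)   # neighbour -> weight, kept across messages

    def ranked_neighbours(self):
        # highest-weighted neighbours get priority when requesting methods
        return sorted(self.neighbours, key=lambda n: self.usefulness[n], reverse=True)

    def feedback(self, neighbour, useful, rate=0.1):
        # evaluation arrives later, e.g. from the next net of agents downstream
        target = 1.0 if useful else 0.0
        self.usefulness[neighbour] += rate * (target - self.usefulness[neighbour])

agent = LearningAgent(["B", "C", "D"])
agent.feedback("C", useful=True)
agent.feedback("B", useful=False)
print(agent.ranked_neighbours())   # C first: it gets priority on the next request
```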
Marginal productivity can be assigned.
And to complete that network, the frequencies with which to request aid from other agents, based on similarity of input, or on type of input if it does not vary by degree, are a neuronal model of both the system itself and the input.
Now that is interesting and surprising. Because the network was not explicitly constructed. It emerges merely by equipping actors/agents with some pragmatic, stochastic request-for-methods ... methods. They already have memory and allocate some of it to parameters of this type, which are not discarded after this or that message is processed, successfully or unsuccessfully.
ANTICS WITH EXCEPTIONS BASICALLY
Distributed systems which are smart can best be considered as Antics: systems of actions which are prospectively highly noticeable by other actors, together with Accidents or Exceptions.
Maybe we should stop calling it PyLogTalk, and just call it Antics.
Especially if messages can be sent by actors to other actors based on statistics of their behavior inside the system. These statistics can trigger thresholds. If the messages contain new methods, as we observed above, we can build a very intelligent system.
Or one that is intelligent compared to some others, anyway.
But some further, surprising comments about this sort of thing. They are merited, I think.
Much like the systems described in some treatises [WOL02], this is a system based on several simple rules. It emerges as one coherent, powerful whole from simpler, weaker parts.
(1) We must, however, not make the mistake of thinking it doesn't matter which rules are treated as primary, and that any rules which give such a whole are somehow preferred to no rules. They are not. Which particular rules are selected matters. Just like which genes decide how a system develops in an environment really, really matters. Often no rules and random message passing is far more productive of desired outputs, as can be easily proven, than inappropriate rules, methods, and so on, in systems with antics.
(2) We must not imagine that rules must be strict; they can sometimes be violated [WAT69,85]. It's a statistical system, a very pragmatic one. Only very important rules, which are few, are strictly followed. Others can be broken or ignored in some contexts where this is more appropriate. That is decided, as we saw above, by learning.
This may be the case for most learning systems. Some exceptions exist, even in rule based systems. Even when the rules are correct ones. And that's how it should be.
(3) Sufficiently infrequent violations or exceptions to trivial or unimportant rules, or even simply minor rules, especially in contexts which the system has not learned how to respond to with practice, must absolutely not be punished or pruned or weighted down. Possibly they may not be rewarded. Strict punishment or downweighting ignores context and statistics. But these types of system live and breathe contextual exceptions and operate statistically. Strict punishment schemes inside the system will severely degrade performance. Which can also be easily proven and is a neat exercise.
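To make point (3) concrete, here is one possible (entirely hypothetical) update rule that behaves this way: a violation is only downweighted once the system has enough practice in that context to judge it; in unfamiliar contexts it is simply not rewarded. The thresholds are illustrative assumptions.

```python
def update_weight(weight, violated, context_observations, practice_threshold=20,
                  reward=0.05, penalty=0.05):
    if not violated:
        return weight + reward              # rule followed: reward as usual
    if context_observations < practice_threshold:
        return weight                       # unpractised context: no reward, but no punishment either
    return max(0.0, weight - penalty)       # well-practised context: downweight the violation

print(update_weight(0.5, violated=True, context_observations=3))    # 0.5, left alone
print(update_weight(0.5, violated=True, context_observations=100))  # 0.45, downweighted
```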
What you get is not so much a rules based system but rules without rules. And that seems to be powerful stuff.
Now consider that this may be an excellent underlying platform for games. Especially very lifelike games.
Because life is like that. Rules without rules.
[This essay is getting quite long. Some games that users can play, assuming such a system, will finally be considered in the next post. After all, games are probably the easiest and most natural systems to implement in the above manner. Especially since the standardized message passing makes reasoning about the relative merits of strategies feasible. Significantly easier. Which makes balancing easier. A strategy is viable, we may say, if it does not require any kind of belief that, in a list of alternative possibilities, this one and not that one, etc., will actually happen; if it will not change when another one happens instead. That's the case if it already involves a best response to each. Invariant. More on that later. Next post. Complex systems are hard to balance, but balance is arguably what leads to fun.]
◕ ‿‿ ◕ つ
#writing #creativity #science #fiction #novel #scifi #publishing #blog
@tribesteemup @thealliance @isleofwrite @freedomtribe @smg
#technology #cryptocurrency #life #history #philosophy
#communicate #freedom #development #future
UPVOTE ! FOLLOW !
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Text and images: ©tibra.
Somewhere at the very top of the text above I put a tag: — Revised: Date.
Leave comments below, with suggestions.
Maybe points to discuss. — As time permits.
Guess what? Meanwhile the length may've doubled . . . ¯\ _ (ツ) _ /¯ . . .
2018.10.16 — POSTED — WORDS: 2.200