# Topology in machine learning

## Which topologies are largely unexplored in machine learning?

Geometry and AI

Matrices, cubes, layers, stacks, and hierarchies are what we can quite accurately call topologies. In this context, consider a topology to be the overall geometric design of a learning system.

As complexity increases, it is often useful to represent these topologies as directed graphs. State diagrams and Markov chains in game theory are two places where directed graphs are commonly used. Directed graphs have vertices (often drawn as closed shapes) and edges, often drawn as arrows connecting the shapes.

We can also represent GANs as a directed graph, in which the output of each network drives the training of the other in an adversarial manner. GANs are topologically like a Möbius strip.
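
As a minimal sketch of this idea (plain Python, no libraries; the vertex names and edges are illustrative assumptions, not a standard notation), a topology can be stored as an adjacency mapping, and the GAN's closed loop between generator and discriminator then shows up as a cycle in the directed graph, while a plain feed-forward stack stays acyclic:

```python
# Store a learning topology as a directed graph: vertex -> set of successors.
# Vertex names are illustrative placeholders, not a fixed convention.
gan_topology = {
    "noise": {"G"},   # latent input feeds the generator
    "data": {"D"},    # real samples feed the discriminator
    "G": {"D"},       # generated samples are judged by D
    "D": {"G"},       # D's error signal trains G - this edge closes the loop
}

mlp_topology = {      # a plain feed-forward stack, by contrast, is acyclic
    "input": {"hidden"}, "hidden": {"output"}, "output": set(),
}

def has_cycle(graph):
    """Detect any directed cycle with a three-color depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            if color[w] == GRAY:                 # back edge: cycle found
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(visit(v) for v in graph if color[v] == WHITE)
```

Running `has_cycle` on the two maps distinguishes the adversarial loop from the feed-forward stack.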

We cannot discover new designs and architectures without understanding not only the mathematics of converging on an optimal solution, or the pursuit of one, but also the topologies of network connections that can support such convergence. It is like first developing a processor while imagining what an operating system would need, before writing the operating system.

To see which topologies we have NOT yet considered, let's first look at which ones we have.

Step one - Extrusion into a second dimension

Success came in the 1980s with the extension of the original perceptron design. Researchers added a second dimension to create a layered neural network. Reasonable convergence was achieved by backpropagating the gradient of an error function through the gradients of the activation functions, attenuated by learning rates and damped by other metaparameters.
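
That layered design and its training loop can be sketched in a few lines of NumPy. This is a hedged toy example: the network sizes, the x1 * x2 target, the random seed, and the learning rate are arbitrary illustrative choices, not taken from the 1980s literature.

```python
import numpy as np

# A minimal two-layer network trained by backpropagation: the error gradient
# flows backward through the activation gradients, scaled by a learning rate.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 2))
y = X[:, :1] * X[:, 1:2]             # toy nonlinear target: x1 * x2

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1                             # learning rate attenuates each update

losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)         # hidden activations
    out = h @ W2 + b2                # linear output layer
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation: chain rule through the output and hidden layers.
    g_out = 2 * err / len(X)
    g_W2 = h.T @ g_out
    g_h = g_out @ W2.T * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    g_W1 = X.T @ g_h
    W2 -= lr * g_W2; b2 -= lr * g_out.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_h.sum(0)
```

After training, the mean squared error has dropped well below its starting value, which is all this sketch is meant to show.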

Step two - Adding dimensions to the discrete input signal

We see the emergence of convolutional networks, built on existing hand-tuned image convolution techniques, introducing new dimensions to the network input: horizontal position, vertical position, color components, and frames. This last dimension is critical to CGI, face replacement, and other morphological techniques in contemporary filmmaking. Without it, we have image recognition, categorization, and noise removal.
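
A hedged sketch of what those added input dimensions look like in code (all shapes and the Sobel-style filter are illustrative assumptions): a video input is a hypercube indexed by frame, vertical position, horizontal position, and color component, and a convolution slides a small filter over one of those planes:

```python
import numpy as np

# A video input as a hypercube of samples: (frames, height, width, channels).
video = np.zeros((16, 32, 32, 3))    # 16 frames, 32x32 pixels, RGB
frame_gray = video[0].mean(axis=-1)  # collapse color for a one-channel demo

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

edge_filter = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)   # Sobel-style filter
response = conv2d_valid(frame_gray, edge_filter)    # shape (30, 30)
```

On a blank frame the response is zero everywhere; on an image with a vertical step edge the same filter produces a strong response, which is the hand-tuned behavior early convolutional networks learned to replace.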

Step three - Stacks of networks

We see in the late 1990s the emergence of stacks of neural networks, in which the training of one network is supervised by another. This introduces conceptual layers, not in the sense of successive layers of neurons, nor in the sense of layers of color in an image. This kind of layering is also not recursion. It is more like the natural world, where one structure is an organ inside another, entirely different kind of structure.

Step four - Hierarchies of networks

We see in research from the 2000s and early 2010s (Laplace and others) the frequent emergence of hierarchies of neural networks, which extend the interaction between networks and continue the analogy with the mammalian brain. We now see a meta-structure in which entire networks become vertices in a directed graph representing a topology.

Step five - Deviations from Cartesian orientation

Non-Cartesian, systematically repeating arrangements of cells, and of the connections between them, have begun to appear in the literature. For example, Gauge Equivariant Convolutional Networks and the Icosahedral CNN (Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling, 2019) use an arrangement based on a convex regular icosahedron.

Summing up

Layers usually have activation functions at the vertices and attenuation matrices mapped onto an exhaustive set of directed edges between adjacent layers [1]. Image convolution layers usually have two-dimensional vertex arrangements with attenuation cubes mapped onto a truncated set of directed edges between adjacent layers [2]. Stacks have entire layered networks as vertices in a meta-directed graph, and those meta-vertices are connected in sequence, each edge being a training metaparameter, a reinforcement (real-time feedback) signal, or some other learning control. Hierarchies of networks reflect the notion that multiple controls can be aggregated to direct lower-level learning, or the flip case, in which multiple learning elements can be controlled by a single higher-level supervisor network.
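
To make the contrast between the exhaustive edge set of note [1] and the truncated edge set of note [2] concrete, here is a rough edge count with made-up sizes (a 32x32 input, a same-size dense layer, and a 3x3 receptive field; borders and channels are ignored for simplicity):

```python
# Edge counts for the two connection schemes in notes [1] and [2].
# All sizes are illustrative assumptions, not taken from any real model.
n_in, n_out = 1024, 1024            # flattened 32x32 input and output layers

dense_edges = n_in * n_out          # exhaustive: every input to every output

h = w = 32
k = 3                               # 3x3 receptive field
conv_edges = h * w * k * k          # each output sees only a local patch
                                    # (borders and channels ignored)

ratio = dense_edges / conv_edges    # roughly a hundredfold reduction here
```

The exact numbers are beside the point; the truncated edge set of a convolution layer is orders of magnitude smaller than the exhaustive one, and that economy is what made the added input dimensions of step two tractable.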

Analysis of trends in learning topologies

If we analyze the trends in machine learning architecture, three topological trends emerge.

• Depth in the causality dimension - layers for signal processing, in which the output of one activation layer feeds the input of the next through a matrix of attenuation parameters (weights). Greater depth could not be achieved until controls beyond basic gradient descent in backpropagation were established.

• Dimensionality of the input signal - from scalar input to hypercubes (video has horizontal, vertical, color-depth-including-transparency, and frame dimensions); note that this is not the same as the number of inputs in the perceptron sense.

• Topological evolution - the two above are Cartesian in nature; dimensions are added at right angles to existing ones. As networks are wired into hierarchies (as in Laplace hierarchies) and Möbius strips (as in GANs), the trends become truly topological, and are best represented by directed graphs in which the vertices are not neurons but smaller networks of them.

Which topologies are missing?

This section expands on the meaning of the title question.

• Is there a reason why multiple meta-vertices, each representing a neural network, cannot be arranged so that several supervisor meta-vertices collectively supervise several worker meta-vertices?
• Why must the backpropagation of an error signal remain the only nonlinear equivalent of negative feedback?
• Couldn't collaboration between meta-vertices be used instead of supervision, with two reciprocal edges representing controls?
• Given that neural networks are mostly used to learn nonlinear phenomena, why should other kinds of closed paths be forbidden in the design of networks or their interconnection?
• Is there some reason audio cannot be added to the video input so that clips can be categorized automatically? If so, is a screenplay a possible feature extraction of a film, and could some adversarial architecture be used to script and produce films without the film studio system? What would that topology look like as a directed graph?
• Although orthogonally arranged cells can simulate any regular packing arrangement of non-orthogonal vertices and edges, is this efficient for image processing when the camera may be tilted at angles other than plus or minus 90 degrees?
• Is it efficient to arrange individual cells in networks, or networks of cells in AI systems, orthogonally in learning systems aimed at understanding and composing natural language or at artificial perception?

Remarks

1. Artificial cells in MLPs use floating-point or fixed-point arithmetic transfer functions rather than electrochemical pulse transmission gated by amplitude thresholds and proximity. They are not realistic simulations of neurons, so calling the vertices neurons would be a misnomer for this kind of analysis.

2. The correlation of image features and relative changes between pixels in close proximity is much higher than that of distant pixels.
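
Note [2] can be illustrated with a small sketch. The following hedged example fakes "natural" image structure by smoothing white noise with a running mean (the sizes, seed, and kernel width are arbitrary choices), then measures how pixel correlation decays with distance:

```python
import numpy as np

# Note [2] in code: on smooth, natural-looking images, nearby pixels
# correlate far more strongly than distant ones. We fake natural structure
# by blurring white noise row-by-row with a 9-pixel running mean.
rng = np.random.default_rng(1)
noise = rng.normal(size=(64, 256))
kernel = np.ones(9) / 9.0
smooth = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, noise)

def shifted_corr(img, d):
    """Correlation between each pixel and the pixel d columns to its right."""
    a = img[:, :-d].ravel()
    b = img[:, d:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

near = shifted_corr(smooth, 1)    # adjacent pixels: strongly correlated
far = shifted_corr(smooth, 32)    # 32 pixels apart: correlation near zero
```

The near correlation lands close to 0.9 while the far one hovers near zero, which is exactly the statistical structure the truncated edge set of a convolution layer exploits.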

Topology is the study of geometric forms in terms of how they connect and branch. The term is used here for the graph aspects of network architectures. It is natural to use it to examine extensions of the neural-network analogy, with the understanding that ANNs are not very similar in their activation to biological neurons. Because of this, it is difficult to limit the discussion to topological concerns when considering what is largely unexplored.
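
As a hedged illustration of how unlike biological neurons these activations are (the weights, inputs, and threshold below are arbitrary illustrative values), compare a smooth floating-point transfer function with an all-or-nothing threshold caricature of a spiking cell:

```python
import math

# An MLP cell applies a smooth arithmetic transfer function, while a
# biological neuron fires an all-or-nothing pulse past a threshold.

def mlp_cell(inputs, weights, bias):
    """Floating-point transfer: weighted sum through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))       # graded, differentiable output

def threshold_cell(inputs, weights, threshold=1.0):
    """Crude spiking caricature: all-or-nothing output at a threshold."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1 if z >= threshold else 0       # a step, not differentiable

x = [0.4, 0.9]
w = [0.5, 1.0]
graded = mlp_cell(x, w, bias=0.0)    # some value strictly between 0 and 1
spike = threshold_cell(x, w)         # exactly 0 or 1
```

The graded output is what makes backpropagation possible; the step output is closer to a biological pulse but has no usable gradient, which is one reason the "neuron" label is a misnomer here.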

The supervisor-worker paradigm is used by stacks and by Laplace hierarchies, while the collaborator paradigm is used by adversarial networks. Although the feedback is negative, the generative model (G) and the discriminative model (D) actually work together toward a goal, much as a devil's advocate is used in discourse to converge on truths. Certainly other designs can be conceived in which the vertices are not artificial neurons but entire ANN or CNN elements.

The teacher-student and manager-worker paradigms are probably just two of many. To simulate neural plasticity, the gardener-plant, repairer-equipment, and engineer-product paradigms should also be investigated.

Backpropagation of an error signal is not the only nonlinear equivalent of negative feedback. The circular topology of GANs is negative feedback too, as you indicated with the Möbius-strip analogy. However, more thought should be given to this.

The collaboration between meta-vertices is interesting. Does collaboration have to be of the mock-adversarial kind? Could positive feedback be useful in artificial intelligence topologies? Farmers, and the truck drivers who distribute their produce, themselves buy groceries in supermarkets that sit at the end of a chain of processes in which each plays only one part. Larger cycles in directed graphs of topologies and designs could probably make good use of positive or negative feedback.
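
The difference between the two kinds of loop can be caricatured with a one-line linear feedback model (the gains are arbitrary illustrative values; real networks are nonlinear, so this is only a cartoon):

```python
# A toy linear loop: each step feeds the signal back with a constant gain.
# |gain| < 1 caricatures negative feedback (the loop settles);
# |gain| > 1 caricatures positive feedback (the loop amplifies).

def run_loop(gain, steps=50, x0=1.0):
    """Feed a signal around a loop `steps` times with constant gain."""
    x = x0
    for _ in range(steps):
        x = gain * x
    return x

damped = run_loop(0.8)      # decays toward zero
amplified = run_loop(1.2)   # grows explosively
```

A useful positive-feedback cycle in a larger directed graph would presumably need some saturating nonlinearity to keep the amplified branch from diverging, which is exactly the kind of design question the larger cycles raise.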

The automated production of films could grow out of research like the Cornell University work on video generation from text by Li, Min, Shen, Carlson, and Carin.

Edge of chaos

An explanation for laypeople:

(https://www.lucd.ai/post/the-edge-of-chaos#!)

The edge of chaos, from chaos theory, could be an important topic of research in artificial intelligence.

What is the edge of chaos? It is a transition zone between the interplay of order and disorder that is believed to exist within a wide variety of systems, and it has found many applications in the fields where it appears.

I'm interested in the interface between AI and chaos theory. The edge of chaos serves as a potential topology that is largely unexplored in machine learning.

This is a rich field with a lot of potential; it is both largely unexplored and underrated.

In this answer, I will examine the benefits of such an analysis. The benefits can be seen in decision-making, such as how best to invest and how to manage the workforce in a company.

Technical explanation:

"We can refer to matrices, cubes, levels, stacks and hierarchies as topologies. In this context, consider the topology as the overall geometric design of a learning system." ~ Douglas Daseeco, opening poster

Compare that with this excerpt from the abstract of the following paper:

"... Through dynamic stability analysis on various computer vision models, we find direct evidence that optimal performance of a deep neural network occurs near the transition point between stable and chaotic attractors. ..." Feng, Ling and Choy Heng Lai. - "Optimal machine intelligence on the edge of chaos." arXiv preprint arXiv: 1909.05176 (2019).

---

"The edge of chaos is a transition space between order and disorder that is believed to exist within a wide variety of systems. This transition zone is a region of bounded instability that creates a constant dynamic interplay between order and disorder.

Although the idea of the edge of chaos is abstract and unintuitive, it has many applications in fields such as ecology, corporate governance, psychology, political science, and other areas of social science. Physicists have shown that adaptation to the edge of chaos occurs in almost all systems with feedback." ~ Wikipedia contributors. "Edge of Chaos." Wikipedia, The Free Encyclopedia. September 10, 2019. Web. September 22, 2019.

The advantages of such a field:

"[...] strategy, protocol, teams, departments, hierarchies. All carefully organized for optimal performance.

At least that's how it should be. However, when we apply the lens of a complexity theorist to our business, we see that things are a little more complex. We no longer see organizations as organizations or departments as departments, but as complex adaptive systems, which are most helpful to understand in the three parts:


First, the employees (in the language of complexity: heterogeneous agents). Every employee has different and evolving decision rules that both reflect the environment and attempt to anticipate its changes. Second, the people who interact with one another, and the structures those interactions create - complexity scientists call this emergence. Finally, there is the overarching structure, which behaves like a higher-level system whose properties and characteristics differ from those of the underlying agents. This last part is why we often say, "The whole is greater than the sum of its parts."

Given managers' desire for control, complexity is not a convenient reality. Instead of facing the brutal reality of the system they work to maintain, managers often operate in silos, creating models and mechanisms that impose a veneer of certainty. In this way they help themselves and their colleagues make decisions with fewer variables. Achieving the goals set in these models produces evidence of success - but it is a simplified success that may not be in the best interests of the overall system.

For example, making it clear to workers that maximizing shareholder returns is the strict priority means that, in any difficult trade-off, the option that favors immediate profitability is preferred. However, we are all aware that cutting expenses and investment to boost short-term margins can be detrimental to the long-term health of a company. Only by taking complexity into account can we effectively balance competing values and priorities (and the impact of decisions on everyone). [...]" ~ Fresno, Blanca González del. "Command from Chaos: Applying Complexity Theory to Work." NEWS BBVA, BBVA, December 4, 2017.

Sources and references:

This may be off-topic; if so, feel free to delete it.

In electronic circuits we have logic blocks - generators, flip-flops, memory cells, selectors, ALUs, FPUs, buses, and many other chips. From those we build computers, and at the next level, computer networks...

For machine learning we need a similar organization of things, but where we have 64-bit computers, our neural networks can have more complex inputs and outputs, and more logical functions, like those definable in any programming language.

For X input bits there are 2^(2^X) possible logical functions for a single output bit, and 2^X bits (a truth table) are enough to select the required logical function.
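
A quick enumeration for small X confirms this count (X = 2 is an arbitrary choice for the check):

```python
from itertools import product

# Counting check: for X input bits there are 2**(2**X) distinct logical
# functions to one output bit, and a truth table of 2**X bits picks one out.
X = 2
rows = list(product([0, 1], repeat=X))         # all 2**X input combinations
truth_tables = set(product([0, 1], repeat=len(rows)))

n_functions = len(truth_tables)                # distinct one-output functions
bits_to_select = len(rows)                     # truth-table length in bits
```

For X = 2 this gives the familiar 16 two-input gates (AND, OR, XOR, NAND, and so on), each selected by a 4-bit truth table.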

So we should investigate these functions systematically and single out the necessary ones, much as the first OpenCV filters were singled out, for example.
