In mathematics, we abstract problems and solutions into mathematical objects. These objects help us understand concepts and apply the derived reasoning to other, similar problems. Such objects can be graphs, number lines, or simulation models. So if you are modeling a move in a chess game, you don't need physical chess pieces: using mathematical concepts, you can do it with a graph model. Each vertex of the graph represents a possible position in the game, and an edge between two vertices represents a legal move. The figure below shows the possible moves of a rook in chess (horizontal and vertical).
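The rook example can be sketched in a few lines of Python. This is just a minimal illustration of the idea: each square of an empty board is a vertex, and an edge joins two squares whenever a rook could move between them (same rank or same file).

```python
# Rook-move graph on an empty board: vertices are squares, edges are legal
# rook moves (any other square in the same row or the same column).

def rook_graph(size=8):
    """Return an adjacency dict mapping each square to its rook moves."""
    squares = [(r, c) for r in range(size) for c in range(size)]
    graph = {}
    for r, c in squares:
        graph[(r, c)] = [(r2, c2) for r2, c2 in squares
                         if (r2, c2) != (r, c) and (r2 == r or c2 == c)]
    return graph

g = rook_graph()
print(len(g[(0, 0)]))  # a rook on an empty board always attacks 14 squares
```

On an 8x8 board every vertex has degree 14 (seven squares in its row plus seven in its column), which the graph makes immediately visible without any physical pieces.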
When devising a model, we abstract out only those features that are essential to understanding its behavior, and we deal with objects that need not be concrete or tangible. For example, when studying the behavior of gas molecules, we model a closed box containing a single molecule, represented as a point with some velocity, and apply simplifying assumptions such as constant speed and elastic collisions. We then extend the same model to N molecules and try to predict the behavior of the gas as a whole. This makes our study of gas molecules much easier.
This kind of abstraction is like riding a bike without consciously worrying about keeping your balance. In fact, abstraction is essential to understanding the hidden secrets of the universe. It lets us explore concepts like complex numbers (i) and infinity in greater depth, and it is also what allows us to model the human brain, as is being done with neural networks.
But can we abstract our lives into a mathematical concept, so that we can understand them more precisely, or maybe even predict their outcomes?
If we look around, there have been no real attempts in this direction, but a concept called the "Markov Decision Process" maps surprisingly well onto the process of living. Our lives can be described as the outcome of the decisions we make every day, where those outcomes are largely random and only partially under our control.
In fact, Markov Decision Processes are used to model optimization problems in reinforcement learning, robotics, and economics. But human life can also be abstracted as a Markov Decision Process, because the concept provides a framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
Simply put, a Markov Decision Process is a set of states and actions, together with rules for transitioning from one state to another, as shown in the figure below: on the right are the Markov states, and on the left is the reinforcement learning model.
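To make that definition concrete, an MDP can be written down as a table mapping each (state, action) pair to a distribution over next states, each with a reward attached. The states, actions, probabilities, and rewards below are invented purely for illustration:

```python
# A toy Markov Decision Process as plain data. All names and numbers here
# are invented for illustration; they are not from any real model.
# transitions[(state, action)] -> list of (probability, next_state, reward)
transitions = {
    ("home", "study"): [(0.8, "job", 1.0), (0.2, "home", 0.0)],
    ("home", "relax"): [(1.0, "home", 0.5)],
    ("job", "work"):   [(0.9, "job", 1.0), (0.1, "home", -1.0)],
    ("job", "relax"):  [(0.5, "job", 0.3), (0.5, "home", 0.0)],
}

# Sanity check: each (state, action) entry is a proper probability distribution.
for (state, action), outcomes in transitions.items():
    total = sum(p for p, _, _ in outcomes)
    assert abs(total - 1.0) < 1e-9, (state, action)
```

Everything the process can do is captured by this one table: the set of states, the actions available in each, and the rules (probabilities and rewards) for moving between them.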
Suppose you are an agent situated in an environment, and the environment is in a certain state. The agent can perform certain actions in the environment, which sometimes result in a reward. Actions also transform the environment, leading to a new state where the agent can perform another action, and so on. The rule by which the agent chooses its actions is called a policy. The environment, in general, is random.
The above concept is based on the Markov assumption: the probability of the next state depends only on the current state and action, not on any preceding states or actions.
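The agent–environment loop and the Markov assumption can be sketched together in a few lines of Python. Everything here (the states, the policy, the transition probabilities) is a made-up toy; the point is only that the next state is sampled from a distribution that depends on the current state and action alone, never on the history:

```python
import random

# Toy dynamics: (state, action) -> list of (probability, next_state, reward).
# All names and numbers are invented for illustration.
dynamics = {
    ("sleep", "wake"): [(1.0, "awake", 0.0)],
    ("awake", "work"): [(0.7, "awake", 1.0), (0.3, "tired", 0.5)],
    ("awake", "rest"): [(1.0, "sleep", 0.2)],
    ("tired", "rest"): [(1.0, "sleep", 1.0)],
    ("tired", "work"): [(0.5, "tired", 0.2), (0.5, "sleep", -0.5)],
}

# A fixed policy: which action the agent chooses in each state.
policy = {"sleep": "wake", "awake": "work", "tired": "rest"}

def step(state, action, rng):
    """Sample (next_state, reward). Note the Markov assumption: the outcome
    depends only on the current state and action, not on past states."""
    outcomes = dynamics[(state, action)]
    r, cumulative = rng.random(), 0.0
    for p, next_state, reward in outcomes:
        cumulative += p
        if r < cumulative:
            return next_state, reward
    return outcomes[-1][1], outcomes[-1][2]

# The agent-environment loop: observe state, act by policy, collect reward.
rng = random.Random(0)
state, total_reward = "sleep", 0.0
for _ in range(10):
    action = policy[state]
    state, reward = step(state, action, rng)
    total_reward += reward
```

Each pass through the loop is one "day": the agent consults its policy, acts, receives a reward, and the world moves to a new state, with the randomness only partially under the agent's control.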
And that is what our lives are: the outcomes of the decisions we make every day, outcomes that are largely random and only partially under our control.
This emerging field of machine learning, called reinforcement learning, is in many ways the future of A.I. It is being used to train robots and to make computers adapt to their environment, and who knows, it may take humans closer to their goal of solving intelligence.