Monte Carlo Methods

Monte Carlo simulations are statistical methods used to simulate probabilistic (or "stochastic") systems and estimate the likelihood of different outcomes. Among the techniques for solving finite Markov decision problems are temporal-difference learning, dynamic programming, and Monte Carlo methods, and each class has its advantages and disadvantages. Dynamic programming techniques are theoretically sound but require a complete and accurate model of the environment. Monte Carlo techniques are conceptually straightforward and model-free, but they are not well suited to incremental, step-by-step computation. Temporal-difference techniques are fully incremental and need no model, but they are harder to analyze. The methods also differ in convergence speed and efficiency.

A Monte Carlo simulation is based on the idea that, when random variables interfere, the probability of different outcomes cannot be calculated analytically. Instead, a Monte Carlo simulation relies on repeatedly drawing random samples: the unknown variable is assigned a random value, the model is run, and a result is recorded. This procedure is repeated many times, with the variable under consideration taking many different values, and once the simulation runs are finished the results are averaged to produce an estimate.

These days, Monte Carlo simulation is used in a wide range of fields, including the physical sciences, engineering, climate change and radiative forcing, computational biology, computer graphics, applied statistics, artificial intelligence, finance and business, law, mathematics, and more, to support both automated and human-intermediated decisions involving uncertainty.

History of Monte Carlo Methods

The Polish-American mathematician Stanislaw Ulam, who was working on the Manhattan Project (the atomic bomb project) at the time, coined the phrase in the late 1940s. The concept originated in the endless games of solitaire Ulam played to pass the time while convalescing from brain surgery: "After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than 'abstract thinking' might not be to lay it out, say, a hundred times and simply observe and count the number of successful plays." Such a technique had just become feasible, since ENIAC, the first general-purpose electronic computer, had recently been built. John von Neumann, one of the project's chief mathematicians, immediately grasped the significance of Ulam's idea and realized it could be applied to the Manhattan Project; the two worked together to develop the technique. Because the project was highly confidential, the approach needed a code name, and "Monte Carlo" was chosen as a reference to the gambling town in Monaco where Ulam's uncle used to play.

Code

Our task now is to create a model of virus spread among people indoors based on some parameters, and to determine confidence intervals for the mathematical expectation of each category of people at time t with a confidence level of 95%. For that we will employ Monte Carlo simulations.

Input Data
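The original input table is not reproduced here, so the dictionary below is an illustrative stand-in: every parameter name and value in it is an assumption chosen to match the description (family size, infection probability, illness duration, quarantine level, horizon, and number of runs).

# Illustrative input parameters; all names and values are assumptions,
# not the article's original table.
params = {
    "family_size": 4,          # people living in the same home
    "p_infect": 0.15,          # daily chance one infected member infects a susceptible one
    "days_to_recover": 14,     # days an infected person stays contagious
    "quarantine_factor": 0.5,  # multiplier on p_infect when members self-isolate
    "n_days": 60,              # simulated horizon, counted from the first case
    "n_sims": 100,             # number of Monte Carlo runs
}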
Model Description

We use the Monte Carlo approach to describe the dynamics of the virus within a family. The approach rests on a large number of computer simulations driven by the input parameters. Because the number of simulations n is large, the law of large numbers lets us take the sample average as an approximation of the mathematical expectation. Although the distribution of each group of individuals is unknown, the central limit theorem tells us that with a sufficient number of trials (n > 100) the distribution of the sample mean is approximately normal; the standard deviation, however, remains unknown. We assume the virus does not change over time, so before modeling we use parameter estimates that scientists have already determined. The only variable that may differ between families is the quarantine level (if everyone stays in a separate room, the chance of catching the illness from another family member is significantly reduced). Furthermore, the model does not account for possible external infection in places such as stores and churches. Once all the parameters are known, the model predicts the number of people in each category on every day of the simulation.
Internal designations:
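The labels that follow are assumptions standing in for the article's own table of designations: S for susceptible family members, I for currently infected (contagious) ones, and R for recovered. With those labels, a single Monte Carlo run over the family might be sketched as follows:

import numpy as np

def simulate_family(params, rng):
    """One Monte Carlo run: daily S/I/R counts for a single family,
    starting from the day of the first case."""
    n, T = params["family_size"], params["n_days"]
    p = params["p_infect"] * params["quarantine_factor"]
    # state[i] == -1 means person i is susceptible; otherwise it is the
    # number of days person i has already been ill
    state = np.full(n, -1)
    state[0] = 0  # patient zero, infected on day 0
    history = np.zeros((T, 3), dtype=int)
    for day in range(T):
        contagious = int(((state >= 0) & (state < params["days_to_recover"])).sum())
        # chance a susceptible member is infected by at least one contagious member today
        p_today = 1.0 - (1.0 - p) ** contagious
        caught = (state == -1) & (rng.random(n) < p_today)
        state[state >= 0] += 1  # one more day of illness for those already infected
        state[caught] = 0       # today's new infections start their count
        s = int((state == -1).sum())
        r = int((state >= params["days_to_recover"]).sum())
        history[day] = (s, n - s - r, r)
    return history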
Importing Libraries

We run a hundred simulations of our experiment (more simulations mean more accurate results, at the expense of more time). At the end, the program displays several plots: the standard deviation for each day, a graphical representation of the confidence intervals, and the dynamics over the n_days days following the first coronavirus case in the family (with and without accumulation).
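A minimal sketch of that simulation loop, continuing the code above (the aggregation and the matplotlib plot are my own illustration, not the article's original listing):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
runs = np.stack([simulate_family(params, rng) for _ in range(params["n_sims"])])
# runs has shape (n_sims, n_days, 3); average over the simulations
mean = runs.mean(axis=0)
std = runs.std(axis=0, ddof=1)

days = np.arange(params["n_days"])
for k, label in enumerate(["susceptible", "infected", "recovered"]):
    plt.plot(days, mean[:, k], label=label)
    plt.fill_between(days, mean[:, k] - std[:, k], mean[:, k] + std[:, k], alpha=0.2)
plt.xlabel("days since the first case in the family")
plt.ylabel("average number of people")
plt.legend()
plt.show()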
Estimating Using T-Interval

Additionally, the confidence intervals are calculated. Since the true standard deviation is unknown, the interval for each mean is built on the Student t-distribution: mean ± t(1 - a/2, n - 1) · s / sqrt(n).
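Continuing the sketch, the calculation might look like this (the day number and category labels are assumed):

import numpy as np
from scipy import stats

def t_interval(samples, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval for the mean, using
    the t-distribution because the true std is unknown."""
    n = len(samples)
    m = samples.mean()
    s = samples.std(ddof=1)
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)
    return m - half, m + half

day = 30  # hypothetical day of interest
for k, label in enumerate(["susceptible", "infected", "recovered"]):
    lo, hi = t_interval(runs[:, day, k])
    print(f"{label}, day {day}: 95% CI for the mean is ({lo:.2f}, {hi:.2f})")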
Enter the day number for which you want to calculate the intervals, along with the significance level (1 minus the confidence level). We then look for the number of trials required to reach a desired interval size, which involves finding the quantile, the standard deviation, the average, and the half-interval size (a sketch of this step closes the section). The model's findings help explain why the family needs to be under quarantine: it permits simulating the scenario, given the input parameters, without an actual infection occurring.
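One possible form of that sample-size step, continuing the sketch above (the 0.1 half-width target and the chosen category are assumptions):

import math
from scipy import stats

def runs_needed(sample_std, half_width, alpha=0.05):
    """Smallest number of simulations for which the (1 - alpha)
    t-interval half-width drops to the requested size; iterates
    because the t-quantile itself depends on the sample size."""
    n = 2
    for _ in range(100):  # fixed-point iteration; converges in a few steps
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)
        required = max(2, math.ceil((t * sample_std / half_width) ** 2))
        if required == n:
            break
        n = required
    return n

day, k = 30, 1  # the "infected" category on the same hypothetical day
print(runs_needed(runs[:, day, k].std(ddof=1), half_width=0.1))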