'''Monte Carlo methods''' are a class of [[computation]]al [[algorithm]]s that rely on repeated [[randomness|random]] sampling to compute their results. These methods are often used in the [[computer simulation|simulation]] of [[physics|physical]] and [[mathematics|mathematical]] systems. Because they rely on repeated computation of random or [[pseudorandomness|pseudo-random]] numbers, these methods are best suited to calculation by a [[computer]] and are typically used when it is unfeasible or impossible to compute an exact result with a [[deterministic algorithm]].<ref name="Measure Anything pg. 46">Douglas Hubbard "How to Measure Anything: Finding the Value of Intangibles in Business" pg. 46, John Wiley & Sons, 2007</ref>
Monte Carlo simulation methods are especially useful for studying systems with a large number of [[coupling (physics)|coupled]] degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see [[cellular Potts model]]). More broadly, Monte Carlo methods are used to model phenomena with significant [[uncertainty]] in inputs, such as the calculation of [[risk]] in business and finance. They are also widely used in mathematics: a classic use is the evaluation of [[definite integral]]s, particularly multidimensional integrals with complicated boundary conditions. When Monte Carlo simulations have been applied in space exploration and in oil and gas exploration, budget overruns and schedule delays have routinely been predicted better by these simulations than by human intuition or alternative [[soft computing]] methods.<ref>Douglas Hubbard "The Failure of Risk Management: Why It's Broken and How to Fix It", John Wiley & Sons, 2009</ref>
The term "Monte Carlo method" was coined in the 1940s by physicists working on nuclear weapons projects at [[Los Alamos National Laboratory]].<ref>{{Cite journal|author=[[Nicholas Metropolis]]|url=http://library.lanl.gov/la-pubs/00326866.pdf|title=The beginning of the Monte Carlo method|journal=Los Alamos Science|issue=1987 Special Issue dedicated to Stanislaw Ulam|pages=125–130|year=1987}}</ref>
== Overview of the method ==
[[Image:Monte carlo method.svg|thumb|right|The Monte Carlo method can be illustrated as a game of [[Battleship (game)|battleship]]. First a player makes some random shots. Next the player applies algorithms (e.g. a battleship is four dots in the vertical or horizontal direction). Finally, based on the outcome of the random sampling and the algorithm, the player can determine the likely locations of the other player's ships.]]
There is no single Monte Carlo method; instead, the term describes a large and widely used class of approaches. However, these approaches tend to follow a particular pattern:
# Define a domain of possible inputs.
# Generate inputs randomly from the domain using a specified probability distribution.
# Perform a [[deterministic]] computation using the inputs.
# Aggregate the results of the individual computations into the final result.
For example, the value of [[pi|<math>\pi</math>]] can be approximated using a Monte Carlo method:
# Draw a square on the ground, then [[inscribed figure|inscribe]] a circle within it. From plane geometry, the ratio of the area of an inscribed circle to that of the surrounding square is <math>\pi/4</math>.
# [[uniform distribution (continuous)|Uniformly]] scatter some objects of uniform size throughout the square, for example grains of rice or sand.
# Since the two areas are in the ratio <math>\pi/4</math>, the objects should fall in the areas in approximately the same ratio. Thus, counting the number of objects in the circle and dividing by the total number of objects in the square will yield an approximation for <math>\pi/4</math>.
# Multiplying the result by 4 will then yield an approximation for <math>\pi</math> itself.
Notice how the <math>\pi</math> approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it is the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of <math>\pi</math>. Note also two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, say, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of <math>\pi</math> will become more accurate both as the grains are dropped more uniformly and as more are dropped.
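The procedure above can be sketched in a few lines of Python; the code and its names are purely illustrative additions, not part of any standard implementation.
<syntaxhighlight lang="python">
import random

def estimate_pi(n_samples: int) -> float:
    """Estimate pi by uniformly sampling points in the unit square
    and counting how many land inside the inscribed quarter circle."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()   # random point in [0, 1) x [0, 1)
        if x * x + y * y <= 1.0:                  # inside the quarter circle of radius 1
            inside += 1
    return 4.0 * inside / n_samples               # the area ratio is pi/4, so multiply by 4

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(n, estimate_pi(n))
</syntaxhighlight>
Running the sketch with increasing sample counts shows the slow convergence mentioned above: each estimate fluctuates around <math>\pi</math>, and the fluctuations shrink only gradually as more points are drawn.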
== History ==
<!-- This is mentioned in a discussion of precursors, 2 paragraphs down. ---
An early variant of the Monte Carlo method can be seen in the [[Buffon's needle]] experiment. -->
The idea was first explored by [[Enrico Fermi]] in the 1930s and by [[Stanisław Ulam]] in 1946; Ulam later contacted [[John von Neumann]] to work on it.<ref>http://people.cs.ubc.ca/~nando/papers/mlintro.pdf</ref>
Physicists at [[Los Alamos Scientific Laboratory]] were investigating [[radiation shielding]] and the distance that [[neutrons]] would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the problem could not be solved with analytical calculations. John von Neumann and Stanislaw Ulam suggested that the problem be solved by modeling the experiment on a computer using chance. Being secret, their work required a code name. Von Neumann chose the name "Monte Carlo", a reference to the [[Monte Carlo Casino]] in [[Monaco]] where Ulam's uncle would borrow money to gamble.<ref name="Measure Anything pg. 46"/><ref>Charles Grinstead & J. Laurie Snell "Introduction to Probability" pp. 10-11, American Mathematical Society, 1997</ref><ref>H.L. Anderson, [http://library.lanl.gov/cgi-bin/getfile?00326886.pdf "Metropolis, Monte Carlo and the MANIAC,"] Los Alamos Science, no. 14, pp. 96-108, 1986.</ref>
Random methods of computation and experimentation (generally considered forms of [[stochastic simulation]]) can arguably be traced back to the earliest pioneers of probability theory (see, e.g., [[Buffon's needle]] and the work on small samples by [[William Sealy Gosset]]), but are more specifically traced to the pre-electronic computing era. The distinction usually drawn for the Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by ''first'' finding a [[probabilistic]] [[meta-algorithm|analog]] (see [[simulated annealing]]). Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread.
Monte Carlo methods were central to the [[simulation]]s required for the [[Manhattan Project]], though they were severely limited by the computational tools of the time. It was therefore only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at [[Los Alamos National Laboratory|Los Alamos]] for early work relating to the development of the [[hydrogen bomb]], and became popularized in the fields of [[physics]], [[physical chemistry]], and [[operations research]]. The [[Rand Corporation]] and the [[U.S. Air Force]] were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and the methods began to find wide application in many different fields.
Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of [[pseudorandom number generator]]s, which were far quicker to use than the tables of random numbers that had previously been used for statistical sampling.
== Applications ==
As mentioned, Monte Carlo simulation methods are especially useful for modeling phenomena with significant [[uncertainty]] in inputs and in studying systems with a large number of [[coupling (physics)|coupled]] degrees of freedom. Specific areas of application include:
=== Applications in the physical sciences ===
Monte Carlo methods are very important in [[computational physics]], [[physical chemistry]], and related applied fields, and have diverse applications ranging from complicated [[quantum chromodynamics]] calculations to the design of [[heat shield]]s and [[aerodynamics|aerodynamic]] forms. The Monte Carlo method is widely used in [[statistical physics]], particularly in [[Monte Carlo molecular modeling]] as an alternative to computational [[molecular dynamics]], as well as to compute [[statistical field theory|statistical field theories]] of simple particle and polymer models;<ref>{{Cite journal|author=[[Stephan A. Baeurle]]|url=http://www.springerlink.com/content/xl057580272w8703/|title=Multiscale modeling of polymer materials using field-theoretic methodologies: a survey about recent developments|journal=Journal of Mathematical Chemistry|volume=46|issue=2|pages=363–426|year=2009|doi=10.1007/s10910-008-9467-3}}</ref> see [[Monte Carlo method in statistical physics]]. In experimental [[particle physics]], these methods are used for designing [[particle detector|detectors]], understanding their behavior and comparing experimental data to theory, and, on a vastly larger scale, for modelling [[galaxy|galaxies]].<ref>H. T. MacGillivray, R. J. Dodd, Monte-Carlo simulations of galaxy systems, ''Astrophysics and Space Science'', Volume 86, Number 2 / September, 1982, Springer Netherlands [http://www.springerlink.com/content/rp3g1q05j176r108/fulltext.pdf]</ref>
Monte Carlo methods are also used in the [[Ensemble forecasting|ensemble models]] that form the basis of modern [[Numerical weather prediction|weather forecasting]] operations.
=== Design and visuals ===
Monte Carlo methods have also proven efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in [[global illumination]] computations which produce photorealistic images of virtual 3D models, with applications in [[video game]]s, [[architecture]], [[design]], computer generated [[film]]s, and cinematic special effects.
=== Applications in finance and business ===
[[Monte Carlo methods in finance]] are often used to calculate the value of companies, to evaluate investments in projects at a business-unit or corporate level, or to evaluate financial derivatives. Monte Carlo methods used in these cases allow the construction of stochastic or probabilistic financial models, as opposed to the traditional static and deterministic models, thereby enhancing the treatment of uncertainty in the calculation. For use in the insurance industry, see [[stochastic modelling]].
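As one illustration of such a probabilistic model, the sketch below prices a European call option by simulating terminal stock prices under a geometric Brownian motion assumption. The function name and all parameter values are illustrative assumptions, not taken from the text above.
<syntaxhighlight lang="python">
import math
import random

def mc_european_call(s0, strike, rate, sigma, maturity, n_paths):
    """Monte Carlo price of a European call, assuming the underlying
    follows geometric Brownian motion (a common textbook assumption)."""
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        # Terminal price under risk-neutral GBM dynamics.
        s_t = s0 * math.exp((rate - 0.5 * sigma ** 2) * maturity
                            + sigma * math.sqrt(maturity) * z)
        payoff_sum += max(s_t - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * payoff_sum / n_paths

print(mc_european_call(s0=100, strike=105, rate=0.03, sigma=0.2,
                       maturity=1.0, n_paths=200_000))
</syntaxhighlight>
The same sampling loop generalizes to payoffs and dynamics that have no closed-form price, which is where Monte Carlo pricing is typically applied in practice.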
=== Telecommunications ===
When planning a wireless network, the design must be shown to work for a wide variety of scenarios that depend mainly on the number of users, their locations, and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if the results are not satisfactory, the network design goes through an optimization process.
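A toy sketch of this idea follows; the base-station positions, coverage radius, and user counts are entirely hypothetical and serve only to show how random user layouts feed a performance estimate.
<syntaxhighlight lang="python">
import random

# Hypothetical base-station positions and coverage radius (toy values).
BASE_STATIONS = [(250, 250), (750, 250), (500, 750)]
COVERAGE_RADIUS = 300.0
AREA_SIZE = 1000.0

def coverage_fraction(n_users: int) -> float:
    """Scatter users uniformly and return the fraction within range of any station."""
    covered = 0
    for _ in range(n_users):
        x, y = random.uniform(0, AREA_SIZE), random.uniform(0, AREA_SIZE)
        if any((x - bx) ** 2 + (y - by) ** 2 <= COVERAGE_RADIUS ** 2
               for bx, by in BASE_STATIONS):
            covered += 1
    return covered / n_users

# Repeating the experiment over many random user layouts gives a distribution of
# coverage, which a planner could compare against a target service level.
trials = [coverage_fraction(500) for _ in range(200)]
print(min(trials), sum(trials) / len(trials), max(trials))
</syntaxhighlight>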
=== Games ===
Monte Carlo methods have recently been applied in game-playing [[artificial intelligence]]. Most notably, the games of [[Go (game)|Go]] and Battleship have seen remarkably successful computer players based on Monte Carlo algorithms. One of the main problems of this approach in game playing is that it sometimes misses an isolated, very good move. These approaches are often strong strategically but weak tactically, as tactical decisions tend to rely on a small number of crucial moves which are easily missed by the randomly searching Monte Carlo algorithm.
== Monte Carlo simulation versus “what if” scenarios ==
The opposite of Monte Carlo simulation might be considered deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a “best guess” estimate. Various combinations of each input variable are manually chosen (such as best case, worst case, and most likely case), and the results are recorded for each so-called “what if” scenario.<ref name=Vose>David Vose: “Risk Analysis, A Quantitative Guide,” Second Edition, p. 13, John Wiley & Sons, 2000.</ref>
By contrast, Monte Carlo simulation considers random sampling of [[probability distribution]] functions as model inputs to produce hundreds or thousands of possible outcomes instead of a few discrete scenarios. The results provide probabilities of different outcomes occurring.<ref>David Vose: “Risk Analysis, A Quantitative Guide,” Second Edition, p. 16, John Wiley & Sons, 2000.</ref>
For example, a comparison of a spreadsheet cost-construction model run using traditional “what if” scenarios, and then run again with Monte Carlo simulation and [[triangular distribution|triangular]] probability distributions, shows that the Monte Carlo analysis has a narrower range than the “what if” analysis. This is because the “what if” analysis gives equal weight to all scenarios.<ref>David Vose: “Risk Analysis, A Quantitative Guide,” Second Edition, p. 17 (showing a graph), John Wiley & Sons, 2000.</ref>
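The contrast can be made concrete with a small sketch using made-up cost figures: three cost items are summed either as discrete best/most-likely/worst scenarios or as Monte Carlo samples from triangular distributions.
<syntaxhighlight lang="python">
import random

# Hypothetical cost items: (best case, most likely, worst case), in arbitrary units.
ITEMS = [(90, 100, 130), (45, 50, 70), (180, 200, 260)]

# "What if" scenarios: add up the corresponding point estimates.
best = sum(low for low, mode, high in ITEMS)
likely = sum(mode for low, mode, high in ITEMS)
worst = sum(high for low, mode, high in ITEMS)
print("what-if range:", best, likely, worst)

# Monte Carlo: sample each item from a triangular distribution and sum.
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in ITEMS)
    for _ in range(100_000)
)
# 5th and 95th percentiles of the simulated total cost.
print("MC 5%-95% range:", totals[5_000], totals[95_000])
</syntaxhighlight>
Because extreme values of several items rarely coincide in the random samples, the simulated percentile range is typically narrower than the naive best-to-worst span, which matches the comparison described above.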
For further discussion, see [[Corporate_finance#Quantifying_uncertainty|quantifying uncertainty]] under [[corporate finance]].
== Applications in mathematics ==
In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers and observing the fraction of those numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
=== Integration ===
{{main|Monte Carlo integration}}
Deterministic methods of [[numerical integration]] usually operate by taking a number of evenly spaced samples from a function. In general, this works very well for functions of one variable. However, for functions of [[vector space|vector]]s, deterministic quadrature methods can be very inefficient. To numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required. For instance, a 10×10 grid requires 100 points. If the vector has 100 dimensions, the same spacing on the grid would require [[googol|10<sup>100</sup>]] points—far too many to be computed. 100 [[dimension]]s is by no means unusual, since in many physical problems a "dimension" is equivalent to a [[degrees of freedom (physics and chemistry)|degree of freedom]]. (See [[curse of dimensionality]].)
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably [[well-behaved]], it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of average of the function values at these points. By the [[law of large numbers]], this method displays <math>1/\sqrt{N}</math> convergence—i.e. quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
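A small sketch of this idea, using a deliberately simple 100-dimensional integrand chosen for illustration: the integral of the sum of the coordinates over the unit hypercube is exactly <math>d/2</math>, so the error of the estimate is easy to track.
<syntaxhighlight lang="python">
import random

DIM = 100
EXACT = DIM / 2.0   # integral of sum(x_i) over the unit hypercube [0,1]^100

def mc_integral(n_points: int) -> float:
    """Average the integrand over uniformly random points in [0,1]^DIM."""
    total = 0.0
    for _ in range(n_points):
        total += sum(random.random() for _ in range(DIM))
    return total / n_points

# Quadrupling the sample size should roughly halve the error (1/sqrt(N) convergence),
# even though a 100-dimensional grid would be hopelessly large.
for n in (1_000, 4_000, 16_000):
    print(n, abs(mc_integral(n) - EXACT))
</syntaxhighlight>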
A refinement of this method is to somehow make the points random, but more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Understandably, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines discussed in the topics listed below.
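As a concrete (and deliberately idealized) illustration of drawing points from a distribution shaped like the integrand, the sketch below estimates <math>\textstyle\int_0^1 x^4\,dx = 0.2</math> both with uniform samples and with samples drawn from the density <math>5x^4</math> via inverse-transform sampling; in the second case the weights are constant, so the variance vanishes. The example is an illustrative extreme, not a general recipe.
<syntaxhighlight lang="python">
import random

N = 50_000

# Plain Monte Carlo: x ~ Uniform(0,1), average x^4.
plain = sum(random.random() ** 4 for _ in range(N)) / N

# Importance sampling: draw x from the density p(x) = 5*x^4 on (0,1]
# and average the weights f(x)/p(x), which here equal exactly 1/5.
total = 0.0
for _ in range(N):
    u = 1.0 - random.random()           # in (0, 1], avoids u == 0
    x = u ** 0.2                        # inverse-CDF sample from p(x) = 5*x^4
    total += x ** 4 / (5.0 * x ** 4)    # weight is 0.2 for every sample
importance = total / N

print(plain, importance, 0.2)
</syntaxhighlight>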
A similar approach involves using [[low-discrepancy sequence]]s instead—the [[quasi-Monte Carlo method]]. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the area better in a sense and samples more of the most important points that can make the simulation converge to the desired solution more quickly.
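The sketch below compares pseudorandom and low-discrepancy points on the same <math>\pi/4</math> area estimate used earlier. The two-dimensional [[Halton sequence]] (bases 2 and 3) is one standard low-discrepancy construction; the implementation here is only a minimal illustration.
<syntaxhighlight lang="python">
import random

def radical_inverse(i: int, base: int) -> float:
    """Van der Corput radical inverse of i in the given base (one Halton coordinate)."""
    inv, result = 1.0 / base, 0.0
    while i > 0:
        result += (i % base) * inv
        i //= base
        inv /= base
    return result

def in_circle(x: float, y: float) -> bool:
    return x * x + y * y <= 1.0

N = 10_000

# Pseudorandom estimate of pi.
pseudo = 4.0 * sum(in_circle(random.random(), random.random()) for _ in range(N)) / N

# Quasi-Monte Carlo estimate using the 2-D Halton sequence (bases 2 and 3).
quasi = 4.0 * sum(in_circle(radical_inverse(i, 2), radical_inverse(i, 3))
                  for i in range(1, N + 1)) / N

print(pseudo, quasi)
</syntaxhighlight>
The Halton points cover the square more evenly than independent random points, which is the sense in which the sequence "fills" the area better.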
==== Integration methods ====
* Direct sampling methods
** [[Importance sampling]]
** [[Stratified sampling]]
** [[Monte_Carlo_integration#Recursive_stratified_sampling|Recursive stratified sampling]]
** [[VEGAS algorithm]]
* [[Random walk Monte Carlo]] including [[Markov chain]]s
** [[Metropolis-Hastings algorithm]]
* [[Gibbs sampling]]
=== Optimization ===
Most Monte Carlo optimization methods are based on [[random walk]]s. Essentially, the program moves a marker around in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving against the [[gradient]].
Another powerful and very popular application of random numbers in numerical simulation is [[optimization (mathematics)|numerical optimization]]. These problems use functions of some often large-dimensional vector that are to be minimized (or maximized). Many problems can be phrased in this way: for example, a [[computer chess]] program could be seen as trying to find the set of, say, 10 moves which produces the best evaluation function at the end. The [[traveling salesman problem]] is another optimization problem. There are also applications to engineering design, such as [[multidisciplinary design optimization]].
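A compact sketch of such a random-walk optimizer, in the spirit of [[simulated annealing]], applied to a made-up one-dimensional function with several local minima; the objective, step size, temperature schedule, and all other constants are illustrative assumptions.
<syntaxhighlight lang="python">
import math
import random

def f(x: float) -> float:
    """Toy objective with many local minima; the global minimum is near x = 2.2."""
    return (x - 2.0) ** 2 + 3.0 * math.sin(5.0 * x)

def anneal(x0=0.0, steps=20_000, step_size=0.5, t0=5.0, cooling=0.9995):
    """Random-walk minimization: mostly move downhill, but occasionally
    accept an uphill move with probability exp(-increase / temperature)."""
    x, fx, temp = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.gauss(0.0, step_size)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling   # gradually reduce the chance of uphill moves
    return best_x, best_f

print(anneal())
</syntaxhighlight>
Allowing occasional uphill moves is what lets the marker escape local minima, which a purely downhill walk cannot do.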
==== Optimization methods ====
* [[Evolution strategy]]
* [[Genetic algorithm]]s
* [[Parallel tempering]]
* [[Simulated annealing]]
* [[Stochastic optimization]]
* [[Stochastic tunneling]]
=== Inverse problems ===
Probabilistic formulation of [[inverse problem]]s leads to the definition of a [[probability distribution]] in the model space. This probability distribution combines [[a priori (statistics)|''a priori'']] information with new information obtained by measuring some observable parameters (data). Since, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the viewer. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. For details, see Mosegaard and Tarantola (1995)<ref>http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/Papers_PDF/MonteCarlo_latex.pdf</ref> or Tarantola (2005).<ref>http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html</ref>
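A minimal sketch of this idea for a made-up nonlinear inverse problem: the forward model is <math>g(m) = m^2</math>, there is one noisy datum and a broad Gaussian prior, and a random-walk Metropolis sampler draws models from the resulting posterior. All numbers and names are hypothetical; this is not the generalized algorithm of the cited papers, only the basic sampler.
<syntaxhighlight lang="python">
import math
import random

# Toy inverse problem: observed datum d = g(m) + noise, with g(m) = m**2.
D_OBS = 4.0          # hypothetical observation
SIGMA_DATA = 0.5     # assumed noise level
SIGMA_PRIOR = 10.0   # broad Gaussian prior on the model parameter m

def log_posterior(m: float) -> float:
    misfit = (D_OBS - m * m) ** 2 / (2.0 * SIGMA_DATA ** 2)
    prior = m * m / (2.0 * SIGMA_PRIOR ** 2)
    return -(misfit + prior)          # unnormalized log-posterior

def metropolis(n_samples=50_000, step=0.5, m0=0.1):
    """Random-walk Metropolis sampling of the posterior over m."""
    samples, m, logp = [], m0, log_posterior(m0)
    for _ in range(n_samples):
        cand = m + random.gauss(0.0, step)
        logp_cand = log_posterior(cand)
        if logp_cand >= logp or random.random() < math.exp(logp_cand - logp):
            m, logp = cand, logp_cand
        samples.append(m)
    return samples

# The posterior is bimodal (modes near m = +2 and m = -2); a single random-walk
# chain may visit only one mode, one of the difficulties noted in the text above.
samples = metropolis()
print(min(samples), sum(samples) / len(samples), max(samples))
</syntaxhighlight>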
=== Computational mathematics ===
Monte Carlo methods are useful in many areas of computational mathematics, where a ''lucky choice'' can find the correct result. A classic example is [[Miller-Rabin primality test|Rabin's algorithm]] for primality testing: for any ''n'' which is not prime, a random ''x'' has at least a 75% chance of proving that ''n'' is not prime. Hence, if ''n'' is not prime but ''x'' says that it might be, we have observed at most a 1-in-4 event. If 10 different random ''x'' say that "''n'' is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one kind of answer with a guarantee (''n'' is composite, and ''x'' proves it) and another kind without a guarantee, but with a bound on how often the unguaranteed answer is wrong: in this case, at most 25% of the time. See also [[Las Vegas algorithm]] for a related, but different, idea.
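A sketch of this kind of algorithm, the [[Miller-Rabin primality test]] with random bases; each composite ''n'' passes a single random round with probability at most 1/4, so repeated rounds shrink the error bound geometrically. The function name and the test numbers in the last line are illustrative.
<syntaxhighlight lang="python">
import random

def is_probable_prime(n: int, rounds: int = 10) -> bool:
    """Miller-Rabin test: returns False only if n is definitely composite;
    a True answer is wrong with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)        # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # this a proves that n is composite
    return True                               # n is probably prime

print(is_probable_prime(2**61 - 1), is_probable_prime(2**61 + 1))
</syntaxhighlight>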
== Monte Carlo and random numbers ==
Interestingly, Monte Carlo simulation methods do not always require truly [[random number]]s to be useful, although for some applications, such as [[primality testing]], unpredictability is vital (see Davenport (1995)).<ref>{{cite web|last=Davenport |first=J. H. |title=Primality testing revisited |url=http://doi.acm.org/10.1145/143242.143290 |doi=http://doi.acm.org/10.1145/143242.143290 |accessdate=2007-08-19 }}</ref> Many of the most useful techniques use deterministic, [[pseudo-random]] sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good [[simulation]]s is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically the numbers should pass a series of statistical tests. One of the simplest and most common tests is to check that the numbers are [[uniform distribution|uniformly distributed]], or follow another desired distribution, when a large enough number of elements of the sequence are considered.
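A minimal sketch of one such statistical check: a chi-square test that bins pseudo-random samples and compares the bin counts with what a uniform distribution would predict. The bin count and sample size are arbitrary choices; 16.92 is the standard 5% critical value for 9 degrees of freedom.
<syntaxhighlight lang="python">
import random

def chi_square_uniform(n_samples: int = 100_000, n_bins: int = 10) -> float:
    """Chi-square statistic for the hypothesis that random.random() is uniform."""
    counts = [0] * n_bins
    for _ in range(n_samples):
        counts[int(random.random() * n_bins)] += 1
    expected = n_samples / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

stat = chi_square_uniform()
# For 10 bins (9 degrees of freedom), values above about 16.92 would be
# suspicious at the 5% level; a good generator stays below it most of the time.
print(stat, stat < 16.92)
</syntaxhighlight>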
== See also ==
* [[Auxiliary field Monte Carlo]]
* [[Bootstrapping (statistics)]]
* [[Demon algorithm]]
* [[Evolutionary computation]]
* [[FERMIAC]]
* [[Markov chain]]
* [[Molecular dynamics]]
* [[Monte Carlo option model]]
* [[Monte Carlo integration]]
* [[Quasi-Monte Carlo method]]
* [[Random number generator]]
* [[Randomness]]
* [[Resampling (statistics)]]
<!--
===Application areas===
* Graphics, particularly for [[Ray tracing (graphics)|ray tracing]]; a version of the [[Metropolis-Hastings algorithm]] is also used for ray tracing where it is known as [[Metropolis light transport]]
* [[Monte Carlo method for photon transport|Modeling light transport in biological tissue]]
* [[Monte Carlo methods in finance]]
* [[Reliability engineering]]
* In simulated annealing for [[protein structure prediction]]
* In semiconductor device research, to model the transport of current carriers
* Environmental science, dealing with contaminant behavior
* Search and rescue and counter-pollution, in models used to predict the drift of a life raft or the movement of an oil slick at sea
* In [[probabilistic design]] for simulating and understanding the effects of variability
* In [[physical chemistry]], particularly for simulations involving atomic clusters
* In [[List of nucleic acid simulation software|biomolecular simulations]]
* In [[polymer physics]]
** [[Bond fluctuation model]]
* In computer science
** [[Monte Carlo algorithm]]
** [[Las Vegas algorithm]]
** [[LURCH]]
** [[Computer go]]
** [[General Game Playing]]
* Modeling the movement of impurity atoms (or ions) in plasmas in tokamaks (e.g. DIVIMP)
* Nuclear and particle physics codes using the Monte Carlo method:
** [[GEANT (program)|GEANT]] — [[European Organization for Nuclear Research|CERN]]'s simulation of high energy particles interacting with a detector
** [[FLUKA]] — [[Istituto Nazionale di Fisica Nucleare|INFN]] and [[European Organization for Nuclear Research|CERN]]'s simulation package for the interaction and transport of particles and nuclei in matter
** [[SRIM]] — a code to calculate the penetration and energy deposition of ions in matter
** [[CompHEP]], [[PYTHIA]] — Monte Carlo generators of particle collisions
** [[Monte Carlo N-Particle Transport Code|MCNP(X)]] — LANL's radiation transport codes
** [[Monte Carlo Universal|MCU]] — universal computer code for simulation of particle transport (neutrons, photons, electrons) in three-dimensional systems by means of the Monte Carlo method
** [[EGS (program)|EGS]] — [[SLAC|Stanford]]'s simulation code for coupled transport of electrons and photons
** [[PEREGRINE]] — LLNL's Monte Carlo tool for radiation therapy dose calculations
** [[BEAMnrc]] — Monte Carlo code system for modeling radiotherapy sources ([[Linear particle accelerator|LINAC]]s)
** [[PENELOPE]] — Monte Carlo for coupled transport of photons and electrons, with applications in radiotherapy
** [[MONK]] — Serco Assurance's code for the calculation of [[Neutron multiplication factor|k-effective]] of nuclear systems
* Modelling of [[foam]] and cellular structures
* Modeling of [[biological tissue|tissue]] [[morphogenesis]]
* Computation of [[hologram]]s
* [[Phylogenetics|Phylogenetic analysis]], i.e. [[Bayesian inference]], [[Markov chain Monte Carlo]]
=== Other methods employing Monte Carlo ===
* Assorted random models, e.g. [[self-organized criticality]]
* [[Direct simulation Monte Carlo]]
* [[Dynamic Monte Carlo method]]
* [[Kinetic Monte Carlo]]
* [[Quantum Monte Carlo]]
* [[Quasi-Monte Carlo method]] using [[low-discrepancy sequence]]s and self-avoiding walks
* Semiconductor charge transport and the like
* [[Electron microscopy]] beam-sample interactions
* [[Stochastic optimization]]
* [[Cellular Potts model]]
* [[Markov chain Monte Carlo]]
* [[Cross-entropy method]]
* [[Applied information economics]]
* [[Monte Carlo localization]]
* [[Evidence-based Scheduling]]
* [[Binary collision approximation]]
-->
== References ==
<references />
*{{cite journal |last=Metropolis |first=N. |coauthors=Ulam, S. |year=1949 |title=The Monte Carlo Method |journal=Journal of the American Statistical Association |volume=44 |issue=247 |pages=335–341 |doi=10.2307/2280232 |url=http://jstor.org/stable/2280232 |pmid=18139350 |publisher=American Statistical Association }}
*{{cite journal |last=Metropolis |first=Nicholas |coauthors=Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward |year=1953 |title=[[Equation of State Calculations by Fast Computing Machines]] |journal=Journal of Chemical Physics |volume=21 |issue=6 |pages=1087 |doi=10.1063/1.1699114 }}
*{{cite book |title=Monte Carlo Methods |last=Hammersley |first=J. M. |coauthors=Handscomb, D. C. |year=1975 |publisher=Methuen |location=London |isbn=0416523404 }}
*{{cite book |title=Judgement under Uncertainty: Heuristics and Biases |last=Kahneman |first=D. |coauthors=Tversky, A. |year=1982 |publisher=Cambridge University Press }}
*{{cite book |title=An Introduction to Computer Simulation Methods, Part 2, Applications to Physical Systems |last=Gould |first=Harvey |coauthors=Tobochnik, Jan |year=1988 |publisher=Addison-Wesley |location=Reading |isbn=020116504X }}
*{{cite book |title=The Monte Carlo Method in Condensed Matter Physics |last=Binder |first=Kurt |authorlink=Kurt Binder |year=1995 |publisher=Springer |location=New York |isbn=0387543694 }}
*{{cite book |title=Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code) |last=Berg |first=Bernd A. |year=2004 |publisher=World Scientific |location=Hackensack, NJ |isbn=9812389350 }}
*{{cite book |title=Monte Carlo and quasi-Monte Carlo methods |last=Caflisch |first=R. E. |year=1998 |series=Acta Numerica |volume=7 |publisher=Cambridge University Press |pages=1–49 }}
*{{cite book |title=Sequential Monte Carlo methods in practice |last=Doucet |first=Arnaud |coauthors=Freitas, Nando de; Gordon, Neil |year=2001 |publisher=Springer |location=New York |isbn=0387951466 }}
*{{cite book |title=Monte Carlo: Concepts, Algorithms, and Applications |last=Fishman |first=G. S. |year=1995 |publisher=Springer |location=New York |isbn=038794527X }}
*{{cite book |title=Stochastic Simulation in Physics |last=MacKeown |first=P. Kevin |year=1997 |publisher=Springer |location=New York |isbn=9813083263 }}
*{{cite book |title=Monte Carlo Statistical Methods |last=Robert |first=C. P. |coauthors=Casella, G. |year=2004 |edition=2nd |publisher=Springer |location=New York |isbn=0387212396 }}
*{{cite book |title=Simulation and the Monte Carlo Method |last=Rubinstein |first=R. Y. |coauthors=Kroese, D. P. |year=2007 |edition=2nd |publisher=John Wiley & Sons |location=New York |isbn=9780470177938 }}
*{{cite journal |last=Mosegaard |first=Klaus |coauthors=Tarantola, Albert |year=1995 |title=Monte Carlo sampling of solutions to inverse problems |journal=J. Geophys. Res. |volume=100 |issue=B7 |pages=12431–12447 |doi=10.1029/94JB03097 }}
*{{cite book |title=Inverse Problem Theory |last=Tarantola |first=Albert |year=2005 |publisher=Society for Industrial and Applied Mathematics |location=Philadelphia |isbn=0898715725 |url=http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html }}
== External links ==
*[http://mathworld.wolfram.com/MonteCarloMethod.html Overview and reference list], MathWorld
*[http://www.phy.ornl.gov/csep/CSEP/MC/MC.html Introduction to Monte Carlo Methods], Computational Science Education Project
*[http://www.sitmo.com/eqcat/15 Overview of formulas used in Monte Carlo simulation], the Quant Equation Archive, at sitmo.com
*[http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html The Basics of Monte Carlo Simulations], [[University of Nebraska-Lincoln]]
*[http://office.microsoft.com/en-us/assistance/HA011118931033.aspx Introduction to Monte Carlo simulation] (for [[Microsoft Excel|Excel]]), Wayne L. Winston
*[http://www.brighton-webs.co.uk/montecarlo/concept.asp Monte Carlo Methods - Overview and Concept], brighton-webs.co.uk
*[http://www.cooper.edu/engineering/chemechem/monte.html Molecular Monte Carlo Intro], [[Cooper Union]]
*[http://www.princeton.edu/~achremos/Applet1-page.htm Monte Carlo techniques applied in physics]
*[http://www.global-derivatives.com/maths/k-o.php Monte Carlo Simulation in Finance], global-derivatives.com
*[http://twt.mpei.ac.ru/MAS/Worksheets/approxpi.mcd Approximation of π with the Monte Carlo Method]
*[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=265905 Risk Analysis in Investment Appraisal], The Application of Monte Carlo Methodology in Project Appraisal, Savvakis C. Savvides
*[http://en.wikiversity.org/wiki/Probabilistic_Assessment_of_Structures Probabilistic Assessment of Structures using the Monte Carlo method], Wikiversity paper for students of structural engineering
*[http://www.puc-rio.br/marco.ind/quasi_mc.html A very intuitive and comprehensive introduction to Quasi-Monte Carlo methods]
*[http://knol.google.com/k/giancarlo-vercellino/pricing-using-monte-carlo-simulation/11d5i2rgd9gn5/3# Pricing using Monte Carlo simulation], a practical example, Prof. Giancarlo Vercellino