AI Used to Simulate Fluid Dynamics

ARTIFICIAL INTELLIGENCE
Artificial intelligence is often called machine intelligence: it refers to machines that can reason and act in ways that resemble human thinking.
 FLUID DYNAMICS
Fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids (liquids and gases). It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion).
Scientists have used neural networks, processing systems that work loosely like the human brain, to simulate the dynamics of fluids. The neural network not only learned how those fluids behave, but was able to simulate them much faster than traditional algorithms.
THE TRADITIONAL APPROACH
In order to simulate the motion of a fluid, the processor has to solve complicated systems of equations, such as the Euler equations.
These equations describe how water or any other fluid (including smoke) behaves in a given situation. They are quite complicated and take a lot of time to solve. Using a modern GPU (graphics card) helps a lot, but it is effective only for small, less complex scenarios.
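For reference, one standard form of the incompressible Euler equations (not written out in the original post) is:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0,$$

where u is the velocity field, p the pressure, ρ the density, and g a body force such as gravity.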
THE NEURAL NETWORK APPROACH
So basically, instead of solving the equations directly, we use a neural network. The network learns what happens to the fluid in a given situation, and after the learning period it can determine the fluid's behavior in any situation, not just those from the learning session. Whereas the traditional approach solves the equations from scratch every time, with the new approach we create a neural network and teach it how the fluid behaves on a large number of examples. The network then tries to find the patterns (the parameters) that match all the examples. After the learning is done, we can start to ask questions ;)
(Please keep in mind that this is a huge simplification in order to show the overall idea.)
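As a rough illustration of that idea, here is a minimal sketch in Python (PyTorch), assuming we already have input/output pairs generated by a traditional solver; the data, network size, and training settings here are all hypothetical:

```python
# Minimal sketch: train a small network to map a fluid state at time t
# to the state at t+dt, using pairs produced offline by a classical solver.
import torch
import torch.nn as nn

# Hypothetical training data; random tensors stand in for solver output.
states = torch.randn(1024, 64)       # 1024 samples, 64-dim flattened fluid state
next_states = torch.randn(1024, 64)  # corresponding state one time step later

surrogate = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):             # the "learning period"
    pred = surrogate(states)
    loss = loss_fn(pred, next_states)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, one cheap forward pass replaces an expensive solver run.
with torch.no_grad():
    prediction = surrogate(states[:1])
```

The design point is that all the expensive physics is paid for once, during data generation and training; afterwards each prediction is a single fast forward pass.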
Now, the amazing part is that performing computations with a (trained) neural network is much faster than running the traditional algorithms. Whereas the old method needs at least several minutes, the AI can give a result in milliseconds.
This is where the similarity of a neural network to the human brain comes in: if you drop a glass of water, you don't take pencil and paper and start solving complicated equations. You just know what's going to happen, just like an artificial neural network.
EXTREME FLUID DYNAMICS WITH ARTIFICIAL INTELLIGENCE
(Figure: physics-constrained, data-driven prediction of extreme and rare events in Moehlis-Faisst-Eckhardt turbulence with reservoir computing and recurrent neural networks; part of the project of Nguyen Anh Khoa Doan.)
(Figure: heat release rate of turbulent MILD combustion solved by direct numerical simulation, where the challenge is to predict the occurrence of auto-ignition kernels accurately in time and space; part of the project of Nguyen Anh Khoa Doan.)
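Reservoir computing, mentioned in the caption above, trains only a linear readout on top of a fixed random recurrent network. Here is a minimal echo state network sketch, with a toy signal standing in for a turbulence observable; all sizes and values are illustrative, not taken from the project:

```python
# Minimal echo state network (reservoir computing) for one-step prediction.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random input and reservoir weights; only W_out is trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(0, 1, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy signal standing in for a turbulence observable; predict one step ahead.
t = np.linspace(0, 40, 2000)
sig = np.sin(t) * np.cos(0.3 * t)
inputs, targets = sig[:-1, None], sig[1:]

X = run_reservoir(inputs)
# Ridge-regression readout (the only trained part of the network).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets)
pred = X @ W_out
print("train MSE:", np.mean((pred - targets) ** 2))
```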
We develop and apply a variety of computational techniques, based on artificial intelligence, machine learning, and weather-forecasting methods, for the accurate physical description, prediction, and control of rare and extreme events. In the absence of a full physical description, existing databases and experimental data will be used to develop hybrid predictive tools, which will be physics-based and data-driven. The broad objectives are:
1. Modelling combustion systems with artificial intelligence
Combustion problems are governed by physical laws. Thus, we will continue to make use of mathematical models based on conservation laws. However, to minimize the uncertainties of our models, we propose to use data from existing experimental and computational databases that are either publicly available or produced within the group. In order for the model to become more accurate given the information from the external data, we will use artificial intelligence and machine learning algorithms, in particular spectral proper orthogonal decomposition (in collaboration with Prof. Oliver Schmidt, UCSD), clustering (in collaboration with Prof. Peter Schmid, Imperial College London), Bayesian classification, and manifold learning. The deliverable will be software that is able to:
1. self-reprogram (with supervision) any time that external training data, information, or knowledge is inputted, and
2. recognize if there are unacceptably large uncertainties and, hence, select another appropriate model in real time.
2. Prediction and prevention of extreme events
Once a robust model is selected, its parameters will be estimated through data assimilation and online machine learning. Data assimilation is a technique used in weather forecasting to update the forecast from numerical simulations with data from observations, which are sparse in space and time. We propose to use data assimilation based on stochastic updating and Lagrangian optimization to (see the sketch after this list):
1. quantify the least-biased reacting flow parameters, and
2. evaluate the degree of confidence (uncertainty) in the parameters and predictions.
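As a rough sketch of stochastic-updating data assimilation, here is a minimal ensemble Kalman-style update for a toy scalar parameter; the forward model, observation, and noise level are all hypothetical stand-ins, not the group's actual code:

```python
# Minimal stochastic ensemble update for a single flow parameter.
import numpy as np

rng = np.random.default_rng(0)

n_ens = 50                                  # ensemble size
params = rng.normal(1.0, 0.5, size=n_ens)   # prior guesses of the parameter

def model(p):
    """Toy forward model mapping the parameter to an observable."""
    return 2.0 * p

obs = 2.4                                   # one sparse observation
obs_noise = 0.1

# Forecast step: run the model for every ensemble member.
forecasts = model(params)

# Analysis step: Kalman-like gain from ensemble covariances.
cov_pf = np.cov(params, forecasts)[0, 1]    # parameter-forecast covariance
var_f = forecasts.var(ddof=1) + obs_noise**2
gain = cov_pf / var_f

# Stochastic updating: each member sees a perturbed observation.
perturbed_obs = obs + rng.normal(0.0, obs_noise, size=n_ens)
params_post = params + gain * (perturbed_obs - forecasts)

print("posterior mean:", params_post.mean())  # least-biased estimate
print("posterior std :", params_post.std())   # confidence / uncertainty
```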
MACHINE LEARNING USED IN COMPUTATIONAL FLUID DYNAMICS
Replacing a complex calculation in the lattice Boltzmann method with a machine learning model can improve the performance of the simulator named Palabos (Parallel Lattice Boltzmann Solver).
The lattice Boltzmann method (LBM) is a parallel algorithm in computational fluid dynamics (CFD) for simulating single-phase and multi-phase fluid flows. It is instrumental in modeling complicated boundary conditions and multi-phase interfaces.
It models the fluid as fictive particles that perform consecutive collision and streaming steps over a discrete lattice mesh. The commonly used lattice denomination is DnQm, which means that the lattice has n spatial dimensions and m discrete speeds. In each time step, particles at any node can move only to their nearest neighboring sites, with one of the discrete speeds. Once the particles have moved to one of their neighboring sites, they collide with other particles that arrive at the same site at the same time.
(Figure: particle movement in LBM.)
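To make the collision-and-streaming loop concrete, here is a minimal sketch of a D2Q9 lattice (n = 2 dimensions, m = 9 speeds) with the common BGK single-relaxation-time collision model; the grid size and relaxation time are arbitrary illustration values, and this is not Palabos code:

```python
# Minimal D2Q9 lattice Boltzmann time step with BGK collision.
import numpy as np

nx, ny, tau = 64, 64, 0.6

# D2Q9 discrete velocities and their weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

# Distribution functions f[i, x, y], initialized at rest (equilibrium).
f = np.ones((9, nx, ny)) * w[:, None, None]

def step(f):
    # Streaming: each population moves one node along its discrete velocity.
    for i in range(9):
        f[i] = np.roll(f[i], shift=c[i], axis=(0, 1))
    # Macroscopic density and velocity from the moments of f.
    rho = f.sum(axis=0)
    u = np.einsum("id,ixy->dxy", c.astype(float), f) / rho
    # BGK collision: relax toward the local equilibrium distribution.
    cu = np.einsum("id,dxy->ixy", c.astype(float), u)
    usq = (u**2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return f - (f - feq) / tau

for _ in range(100):
    f = step(f)
```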
Direct numerical simulation (DNS), a simulation approach in computational fluid dynamics, numerically solves the Navier-Stokes equations without any turbulence model.

Large-eddy simulation (LES) is a mathematical model in computational fluid dynamics used for simulating turbulent flows. The size of the computational domain should be at least an order of magnitude larger than the scales characterizing the turbulence energy, while at the same time the computational mesh should be fine enough to resolve the smallest dynamically significant length scale for an accurate simulation.

If all the eddies (an eddy is essentially a swirling current of fluid) are resolved accurately, by directly solving the Navier-Stokes equations on a mesh finer than the smallest turbulent eddy, the computational cost becomes very high. To overcome this cost, in LES only the large-scale motions (large eddies) are computed directly, while the small-scale, sub-grid scale (SGS) motions are modeled.
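A classic example of such an SGS model (standard in the LES literature, though not named in this post) is the Smagorinsky eddy-viscosity model:

$$\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},$$

where Δ is the filter (grid) width, $\bar{S}_{ij}$ is the resolved strain-rate tensor, and the constant C_s is typically around 0.1 to 0.2.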

Concluding: LBM tells us that we can imagine a fluid as fictive particles in space and time; DNS tells us how to simulate the behavior of each particle, given the physical conditions of the system, at each time step; and LES makes it possible to do this for turbulent flows, where the range of scales varies a lot, by approximating the sub-grid scale eddies.

Deep Learning for Steady-State Fluid Flow Prediction in the Advania Data Centers Cloud

In a recent case study, researchers applied deep learning to the complex task of computational fluid dynamics (CFD) simulations. Solving fluid flow problems using CFD demands not only extensive compute resources, but also time for running long simulations. Artificial neural networks (ANNs) can learn complex dependencies between high-dimensional variables, which makes them an appealing technology for researchers who take a data-driven approach to CFD.


In this case study, researchers applied an ANN to predict fluid flow given only the shape of the object to be simulated. The goal of the study was to use the ANN to solve fluid flow problems with a significantly decreased time to solution (on the order of 1,000 times faster), while maintaining the accuracy of a traditional CFD solver.
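The post does not give the network architecture, but a natural fit for "shape in, flow field out" is a convolutional encoder-decoder. A minimal, purely illustrative sketch in Python (PyTorch):

```python
# Minimal encoder-decoder mapping a binary shape mask to a (u, v) flow field.
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, shape_mask):
        return self.decoder(self.encoder(shape_mask))

model = FlowPredictor()
mask = torch.zeros(1, 1, 64, 64)   # hypothetical 64x64 design: 1 = solid
mask[0, 0, 24:40, 24:40] = 1.0
uv = model(mask)                   # predicted (u, v) field, shape (1, 2, 64, 64)
```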

(Figure: the ground-truth flow field (left) versus the predicted flow field (right) for one exemplary simulation sample after 300,000 training steps.)

Creating a large number of simulation samples is paramount to let the ANN learn the dependencies between a simulated design and the flow field around it. Cloud computing provides an excellent source of the additional resources needed to create these simulation samples, in a fraction of the time it would take on a state-of-the-art desktop workstation. The German company Renumics GmbH partnered with UberCloud to explore whether using an UberCloud software container in the cloud to create simulation samples would improve the overall accuracy of the ANN.

Researchers used the open-source CFD code OpenFOAM to perform the CFD simulations. Automatically creating the simulation samples took four steps:

  1. Random two-dimensional shapes were created. They had to be sufficiently diverse to let the neural network learn the dependencies between different kinds of shapes and their surrounding flow fields.
  2. Shapes were meshed and added to an OpenFOAM simulation case template. This template was simulated using the steady-state solver simpleFoam.
  3. Simulation results were post-processed using the open-source visualization tool ParaView. The flow fields were resampled onto a rectangular, regular grid to simplify information processing for the neural network (a minimal resampling sketch follows this list).
  4. Both the simulated design and the flow fields were fed into the neural network's input queue. After training, the neural network was able to infer a flow field merely from seeing the to-be-simulated design.
(Figure: the steps for building the deep-learning workflow.)
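Here is a minimal sketch of the resampling idea from step 3, using scipy's griddata as a stand-in for the ParaView resampling filter; the scattered points and velocity values are synthetic placeholders:

```python
# Resample scattered CFD output onto the regular grid the network expects.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical unstructured CFD output: 5000 mesh points with u-velocity.
points = rng.uniform(0.0, 1.0, size=(5000, 2))   # (x, y) cell centers
u = np.sin(points[:, 0] * np.pi)                 # stand-in velocity values

# Rectangular regular grid (e.g., 64 x 64) for the neural network input.
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

# Interpolate the scattered values onto the regular grid.
u_grid = griddata(points, u, (gx, gy), method="linear")
u_grid = np.nan_to_num(u_grid)                   # fill points outside the hull

print(u_grid.shape)   # (64, 64), ready to feed into the neural network
```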
The research team confirmed a mantra among machine learning engineers, "the more data, the better":
  1. The proposed metrics for measuring the accuracy of the neural network predictions showed better accuracy for the larger number of samples.
  2. Using a large number of high-performance cloud computing nodes (working in parallel on the many samples) effectively compensated for the overhead required to create high volumes of additional samples.
  3. Compared to a state-of-the-art desktop workstation, the cloud-based approach was six times faster, creating tens of thousands of necessary samples in hours instead of days.
The team also concluded that training more complex models (e.g., for transient 3D flows) will require much more data; software platforms for training-data generation and management, as well as flexible compute infrastructure, will become increasingly important.
