How to Read .dat Files in Python

Introduction

This page provides guidelines and examples on how to read in and process MPI-AMRVAC .dat files with Python. All required packages and instructions on how to set up the tools folder are described in Setting up the python tools. The datfile tools can be found in the folder $AMRVAC_DIR/tools/python/datfiles.

Please note: the legacy Python tools do not support polar/cylindrical/spherical datasets, and/or stretched/staggered grids. We highly recommend using yt (see the docs) for datfile analysis with Python.

Reading datasets

Obtaining dataset information

All functionality is contained in the file amrvac_reader.py, present in the datfiles/reading subdirectory. This class contains various instances and methods linking to other classes and methods in different subdirectories, keeping usage plain and simple with just one single import. To import the reader into your script (assuming you installed amrvac_pytools):

import amrvac_pytools as apt

As an example, we will use the file KH0015.dat, which is the 2D MHD Cartesian Kelvin-Helmholtz problem from the tests folder. In order to read in this file, it is sufficient to type

ds = apt.load_datfile('KH0015.dat')        

It is necessary to stress that this command loads the metadata of the .dat file, not the actual data itself. This is particularly useful to inspect large datasets or time series, without having to load the data into memory every time. We can inspect this dataset further by using ds.get_info(), which prints

[INFO] Current file      : KH0015.dat
[INFO] Datfile version   : 4
[INFO] Current time      : 7.5
[INFO] Physics type      : mhd
[INFO] Boundaries        : [0. 0.] -> [4. 4.]
[INFO] Max AMR level     : 3
[INFO] Block size        : [12 12]
[INFO] Number of blocks  : 172
----------------------------------------
Known variables: ['rho', 'm1', 'm2', 'e', 'b1', 'b2', 'v1', 'v2', 'p', 'T']

Note that both the conserved and primitive variables are known and available for data analysis. The current time of the dataset can be retrieved by ds.get_time(), the physical boundaries can be obtained by ds.get_bounds(). The latter returns a list in the form [[x1, x2], [y1, y2]], which will be extended with [z1, z2] for a 3D dataset, with similar reasoning for a 1D dataset. The method ds.get_extrema(var) returns the minimum and maximum value of the requested variable var in the entire dataset, where var is one of the known variables above.
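The nested list returned by ds.get_bounds() is easy to post-process. As a minimal sketch (the helper name is hypothetical, and the boundary values are taken from the KH0015.dat info printed above):

```python
# Hypothetical helper: compute the domain width in each dimension
# from the nested [[x1, x2], [y1, y2], ...] list that ds.get_bounds() returns.
def domain_widths(bounds):
    return [upper - lower for lower, upper in bounds]

# Boundaries of the KH0015.dat example: [0. 0.] -> [4. 4.]
bounds = [[0.0, 4.0], [0.0, 4.0]]
print(domain_widths(bounds))  # [4.0, 4.0]
```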

Working with the data

In order to load the entire dataset into memory, one can use the method

ad = ds.load_all_data()        

Most likely the dataset will contain multiple levels of AMR, such that loading it into one single NumPy array is not trivial. Instead, a regridding is performed to the finest block level present in the grid, and all blocks are regridded to this level using flat interpolation. This operation is parallelised using Python's multiprocessing module, and will only work in Python 3. The optional argument nbprocs specifies the number of processors to use when performing the regridding; this defaults to two less than the number available. The optional argument regriddir specifies the directory in which to save the regridded data as .npy (NumPy) files. The default directory is regridded_files, which will be created in the same folder as the current script if it is not present. Subsequently, the reader will load this regridded data if the script is executed again.
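The flat interpolation used during regridding can be illustrated with plain NumPy: every coarse cell value is simply repeated across the finer cells it covers. This is only a sketch of the idea, not the tools' actual implementation:

```python
import numpy as np

def regrid_block(block, factor):
    """Regrid a 2D block to a finer level by flat interpolation:
    each cell value is repeated `factor` times per dimension."""
    return np.repeat(np.repeat(block, factor, axis=0), factor, axis=1)

# A coarse 2x2 block refined by one AMR level (factor 2) becomes 4x4,
# with each original value filling a 2x2 patch.
coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
fine = regrid_block(coarse, 2)
print(fine.shape)  # (4, 4)
```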

The data can then be accessed by calling the corresponding name in the ad dictionary, for example:

rho = ad['rho']

or any other of the known variables (see above). This works in 1D, 2D and 3D; however, for large 3D datasets the regridding can take quite some time.

Plotting datasets

The datfile tools include support for plotting both 1D and 2D datasets, in two variants. The first option is directly plotting the regridded data, which requires that ds.load_all_data() has been called. Plotting is then done via

ds.rgplot(ad['rho'])

where we passed the NumPy array containing the density as an example.

The second option is much more user-friendly, as it supports plotting the AMR data directly with an optional mesh overlay. This is done through

ds.amrplot('rho')        

Note the difference here between rgplot and amrplot: when calling the latter method we simply supply the name of the variable to plot. Finally, calling ds.show() shows all created figures in separate windows.

Example

The following code snippet produces the figure shown below.

ds = apt.load_datfile('KH0015.dat')
p = ds.amrplot('rho', draw_mesh=True, mesh_linewidth=0.5, mesh_color='white',
               mesh_linestyle='solid', mesh_opacity=0.8)
p.ax.set_title("Density plot")
p.fig.tight_layout()
ds.show()

When assigning the plot to the variable p, we can retrieve the matplotlib figure and axis instances by calling p.fig and p.ax, respectively, transferring full control of the figure to the user. The colorbar can be retrieved by p.colorbar. Plotting on a logarithmic scale is supported for both rgplot and amrplot, by supplying the argument logscale=True. As in the snippet above, setting draw_mesh=True plots the overlying AMR mesh. The linewidth, color, linestyle and opacity of the mesh can also be specified, and take default matplotlib values. The colormap can be chosen by supplying the cmap argument; the default is jet.

Both rgplot and amrplot create new matplotlib figure and axis instances that are used for plotting. However, both methods also accept a figure and axis instance as extra arguments, which will then be used instead. This is useful for, for instance, plotting multiple snapshots on a single figure using subplots, as the example below shows.

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 2, figsize=(10, 6))

ad = ds.load_all_data()
p1 = ds.amrplot('rho', fig=fig, ax=ax[0], draw_mesh=True, mesh_color='white',
                mesh_opacity=0.4, mesh_linewidth=1)
p2 = ds.rgplot(ad['p'], fig=fig, ax=ax[1], cmap='Greens')

p1.colorbar.set_label("density")
p2.colorbar.set_label("pressure")
p1.ax.set_title("density plot with mesh overlay")
p2.ax.set_title("pressure plot")

fig.tight_layout()
plt.show()

Synthetic views

The Python tools also include a ray-tracing algorithm along one of the coordinate axes to create synthetic views. Currently two types are supported, the first one being a synthetic H-alpha view based on a method described in Heinzel et al. (2015). Essentially, based on tables given in said paper the degree of ionisation is calculated using the local pressure and temperature, from which the opacity can be calculated and eventually the H-alpha intensity by integrating along the line of sight. The second type is a synthetic view of the Faraday rotation measure, which is only possible for MHD datasets as it uses the component of the magnetic field parallel to the line of sight. Creating these views can be done as follows:

ds.halpha(line_of_sight='x')
ds.faraday(line_of_sight='x')

In this example, the line of sight is taken along the x-axis, which is the default value. The colormap can be specified using the cmap keyword argument; plotting on a logarithmic scale is done by supplying logscale=True. Additionally, just like before, the matplotlib figure, axis and colorbar instances can be obtained by calling for example Ha = ds.halpha(), followed by Ha.fig, Ha.ax or Ha.colorbar.

Please note that the creation of these synthetic views can take some time for large 3D datasets with multiple levels of AMR, as the routines have to integrate each block and merge the results into one single image. As a final comment, it is important that the units are correctly specified to ensure a consistent calculation of the synthetic views, see Units below.
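The idea behind such a line-of-sight integration can be sketched with NumPy: for the Faraday rotation measure, the electron density times the parallel magnetic-field component is summed along the chosen axis. This is only an illustrative stand-in with assumed array names, omitting the physical prefactors and the block-merging logic of the actual routines:

```python
import numpy as np

def rotation_measure_map(ne, b_par, dl):
    """Illustrative rotation-measure map: integrate n_e * B_parallel
    along the line of sight (here axis 0) with uniform cell size dl.
    Physical prefactors and units are omitted for simplicity."""
    return np.sum(ne * b_par, axis=0) * dl

# Uniform 4x4x4 toy dataset: n_e = 1, B_parallel = 2, cell size 0.5
ne = np.ones((4, 4, 4))
b_par = 2.0 * np.ones((4, 4, 4))
rm = rotation_measure_map(ne, b_par, dl=0.5)
print(rm.shape)   # (4, 4)
print(rm[0, 0])   # 4 cells * 1 * 2 * 0.5 = 4.0
```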

Units

Everything described above (except for the synthetic views) is calculated using normalised code units. In order to retrieve the correct physical values of the variables in the dataset, it is important (and good practice) to correctly define the unit normalisations. The default unit system is cgs, which can be switched to si via

ds.units.cgs=False        

If cgs units are required, this line can be omitted. Setting the normalisations is quite straightforward, and is done through

ds.units.set_units(unit_length=1.0e9, unit_numberdensity=1.0e9, unit_temperature=1.0e6)        

using for example typical solar values in cgs. The unit_length and unit_numberdensity must always be specified; the third argument can either be unit_temperature or unit_velocity. The code then automatically calculates all other normalisations, which can be accessed through

ds.units.unit_length
ds.units.unit_numberdensity
ds.units.unit_temperature
ds.units.unit_density
ds.units.unit_magneticfield
ds.units.unit_time

and so on. Some astrophysical constants are also included, accessible in a similar manner by doing ds.units.m_p, which retrieves the proton mass in either cgs or si units, depending on the unit system used. Known constants are R, m_e, m_p, k_B, ec, c and Rsun for the gas constant, electron and proton mass, Boltzmann constant, elementary charge, speed of light and solar radius, respectively. Converting a variable back to physical units can then be done using

pressure = ad['p'] * ds.units.unit_pressure
current_time = ds.get_time() * ds.units.unit_time

and so on. If no unit normalisations are specified, the default values of MPI-AMRVAC will be used.
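To illustrate how the remaining normalisations can follow from the three that are set, here is a simplified sketch in cgs. It assumes a pure, fully ionised hydrogen plasma; the actual code accounts for more (e.g. the helium abundance), so treat the exact prefactors as assumptions:

```python
import math

# cgs values of physical constants (assumed here; the tools ship their own)
m_p = 1.672621898e-24   # proton mass [g]
k_B = 1.380648520e-16   # Boltzmann constant [erg/K]

def derive_units(unit_length, unit_numberdensity, unit_temperature):
    """Simplified cgs normalisations for a pure, fully ionised hydrogen
    plasma (electrons and protons both contribute to p = 2 n k_B T)."""
    unit_density = m_p * unit_numberdensity
    unit_pressure = 2.0 * unit_numberdensity * k_B * unit_temperature
    unit_velocity = math.sqrt(unit_pressure / unit_density)
    return {
        'unit_density': unit_density,
        'unit_pressure': unit_pressure,
        'unit_velocity': unit_velocity,
        'unit_time': unit_length / unit_velocity,
        'unit_magneticfield': math.sqrt(4.0 * math.pi * unit_pressure),
    }

# Typical solar values, as in the set_units example above
units = derive_units(1.0e9, 1.0e9, 1.0e6)
print(f"{units['unit_velocity']:.3e}")  # ~1.285e+07 cm/s
```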

Source: http://amrvac.org/md_doc_python_datfiles.html
