Genplot Enhancements Log File
LIST OF ENHANCEMENTS IN ANSI-C DEVELOPMENT
Windows users, the new MESSAGEBOX command may be very useful.
See MessageBox -? for further information and help.
Major changes to the string handling code were implemented in Fall
2007.
===========================================================================
Prty 0: Nice upgrade feature
1: Minor bug but not severe, or high priority feature upgrade
2: Major annoying bug but not computationally wrong
3: Error in results -- must be fixed now
===========================================================================
MOT 8/7/2013 - Added ability to change color of axes (just one)
AXCTRL -COLOR
Will cause the specified axis to be drawn in another color. The settings
of $AXCOLR[] will override these settings, so one can force the title (or
tick marks) to be in black if desired.
axctrl left -color 2
let $axcolr[4] = 1
axis
Note that all four axes are independent with respect to color. Although
the right normally follows the left characteristics (unless set
independently), this will not happen with color.
MOT 8/1/2013 - New smoothing capability
It isn't often that I'm surprised by GENPLOT anymore. But smoothing of
square waves turned out to look really ugly. Learned that FFT_SMOOTH
is really a convolution with an inverted bell rather than with a true
Gaussian. So had to implement the Gaussian smooth to deal with curves
containing very high spatial frequency "signals". Expanded SMOOTH with
options to handle both, as well as adding new commands
SMOOTH -FFT - Smooth by the inverted Bell over width
SMOOTH -GAussian - Smooth by true Gaussian over width
GAUSS_SMOOTH - Smooth by true Gaussian over width
SMOOTH_GAUSSIAN - Smooth by true Gaussian over width
FFT_SMOOTH - Existing function using inverted bell
SMOOTH_FFT - Existing function using inverted bell
The effect is the same in both cases, reducing Gaussian noise
by a factor of approximately 1/sqrt(pts).
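For experimenting outside GENPLOT, the true-Gaussian smooth can be sketched in Python as a direct convolution with a normalized Gaussian kernel. This is an illustrative sketch, not the GENPLOT implementation; the width-to-sigma convention and edge renormalization are assumptions.

```python
import math

def gauss_smooth(y, width):
    """Smooth y by direct convolution with a true Gaussian whose
    standard deviation is width/2 (in points, an assumed convention).
    The kernel is normalized so a constant signal passes unchanged."""
    sigma = width / 2.0
    half = int(math.ceil(3 * sigma))
    kernel = [math.exp(-t * t / (2 * sigma * sigma))
              for t in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    out = []
    n = len(y)
    for i in range(n):
        acc = wsum = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:          # truncate the kernel at the edges
                acc += k * y[idx]
                wsum += k
        out.append(acc / wsum)        # renormalize near the edges
    return out
```

Smoothing white noise this way reduces its standard deviation by roughly 1/sqrt of the effective number of averaged points, consistent with the 1/sqrt(pts) statement above.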
MOT 7/10/2013 - Statistics on probability distribution functions
The new functions
@pdf_ave(cv [,xl,xh]) @pdf_avg(cv [,xl,xh])
@pdf_average(cv [,xl,xh]) @pdf_mean(cv [,xl,xh])
@pdf_median(cv [,xl,xh])
@pdf_rms(cv [,xl,xh])
@pdf_std(cv [,xl,xh]) @pdf_sdev(cv [,xl,xh])
@pdf_stdev(cv [,xl,xh]) @pdf_stddev(cv [,xl,xh])
@pdf_variance(cv [,xl,xh]) @pdf_var(cv [,xl,xh])
@pdf_skew(cv [,xl,xh]) @pdf_skewness(cv [,xl,xh])
@pdf_kurt(cv [,xl,xh]) @pdf_kurtosis(cv [,xl,xh])
treat a curve as a probability distribution histogram. The moments
of this pdf (as an integral) are computed. If the optional range
is specified, the curve is assumed to be sorted in X.
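The moments-as-integrals idea behind @pdf_mean and @pdf_variance can be sketched in Python. The trapezoid integration and function name are illustrative assumptions, not the GENPLOT implementation.

```python
def pdf_moments(x, y):
    """Treat the curve (x, y) as an (unnormalized) probability
    density and return (mean, variance), computed as integrals
    over the curve using the trapezoid rule."""
    def trapz(vals):
        return sum(0.5 * (vals[i] + vals[i + 1]) * (x[i + 1] - x[i])
                   for i in range(len(x) - 1))
    norm = trapz(y)                   # total probability (need not be 1)
    mean = trapz([xi * yi for xi, yi in zip(x, y)]) / norm
    var = trapz([(xi - mean) ** 2 * yi for xi, yi in zip(x, y)]) / norm
    return mean, var
```

For a uniform density on [0,1] this gives mean 0.5 and variance 1/12, as expected.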
MOT 7/10/2013 - Documentation of behavior
The statistical commands @mean, @skew, etc. allow a curve as well as
an array as the argument. If a curve is given, the Y argument is
treated as the array to scan. The optional range arguments, which
are integer indices into the array, become X values for the array.
eval @mean($plot, -9.5, 9.5)
returns the average value of the default Y array but only including
points where -9.5 <= x <= 9.5.
0x02 -> In gvcalc(), prints the pseudo-op codes from calculations
0x04 -> In gvcalc(), also prints the stack and arg values
MOT 1/1/2013 - Fixed problem with wait timeout.
MOT 12/26/2012 - Added all the ctype.h functions for determining the type
of character.
isalnum(c), isalpha(c), iscntrl(c), isdigit(c), isgraph(c),
islower(c), isprint(c), ispunct(c), isspace(c), isupper(c),
isxdigit(c)
and the two to convert single characters upper and lower case
tolower(c)
toupper(c)
where c is any integer value (converted to nearest integer).
Return value is system dependent, but non-zero if value is
of the specified class. Behavior of these functions with values
of c outside the range 1 < c < 127 is also system dependent.
MOT 12/4/2012 - Added "DARROW" to draw a double arrow in annotate
(you're welcome)
MOT 12/4/2012 - Added "align" as an alias for "orgmode" in Annotate.
MOT 11/26/2012 - Added options -DEBUG and -DETAIL (synonymous) to the
LISTVAR function. Also added -? to give help. The debug mode
prints out internal information to help debug operation.
MOT 11/15/2012 - Added rnd_norm() as a simple way to get a normal
distribution. Changed rnd_normal() so that it requires giving
two parameters (mean and stdev)
Ian suggests that GVPARSE should be revised so that there is a way
to allow either zero or all of the parameters in a function defn.
MOT 11/4/2012 - Added @E(x) and @cov(x,y) as function in analog to @var.
@E is expectation value (mean), @cov is covariance
MOT 10/31/2012 - Added -HBARGRAPH to the plot options. Horizontal bar
graph.
MOT 10/28/2012 - Added normal() as alias for gauss, and new functions for
chi squared distributions.
normal(x,mu,sigma)
chisqr(x,nu) /* Complex numbers use only real part */
chi2(x,nu) /* Complex numbers use only real part */
These functions have the expected means and distributions.
MOT 10/24/2012 - Added the @MAD(array) and MAD(x1,x2,...,xn) functions to
return the MAD of a sample. The MAD is the "Median Absolute Deviation"
and is an analog to the standard deviation using median concepts. Also
fixed median() function to properly average when the number of items was
even.
For complex numbers, the sorting for the median is based on the real
value first, and if the real is equal, on the imaginary part. However,
for the MAD, the magnitude of the argument |z-median()| is used. It
really doesn't make sense to do median/MAD of complex numbers, but these
functions do so nonetheless.
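The MAD computation described above, including the even-count median averaging, can be sketched in Python (illustrative, real arguments only):

```python
import statistics

def mad(data):
    """Median Absolute Deviation: median(|x - median(x)|).
    statistics.median already averages the two middle values
    when the number of items is even."""
    m = statistics.median(data)
    return statistics.median(abs(v - m) for v in data)
```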
MOT 10/17/2012 - Added a "NHISTOGRAM" and "CDF" transformations to the
transformation function. The NHistogram is a normalized histogram
which automatically sets the -densify and -normalize options. This
makes it easier to work with probability function estimation from data.
The CDF transformation generates the Cumulative Distribution Function
which is the integral of the probability distribution. F(a) = P(x<=a).
Again, a function useful for statistical analysis.
MOT 10/6/2012 - Added a "kernal" estimation for the probability density
function capability.
transform [y | x] kernal [-options]
This is an "idealized" histogram function that more smoothly estimates
the probability distribution function for the samples in Y. It gives
a smooth curve of at least 500 points which mimics the histogram
transf y histogram -density -normalize
but with continuous curves. Default is use of the triangular kernel
function. Formally, the kernel method spreads each measured point out
as the kernel function (think triangle) and then just sums up the values.
This gives big peaks at values with lots of points and small values
where there were few or no points.
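The triangular-kernel idea can be sketched in Python. The point count and range padding are illustrative assumptions, not GENPLOT's defaults.

```python
def kernel_density(samples, width, npts=500):
    """Triangular-kernel estimate of the pdf: spread each sample out
    as a unit-area triangle of half-width `width`, then sum the
    triangles and divide by the sample count to get a density."""
    lo = min(samples) - width
    hi = max(samples) + width
    xs = [lo + (hi - lo) * i / (npts - 1) for i in range(npts)]
    n = len(samples)
    ys = []
    for x in xs:
        total = 0.0
        for s in samples:
            d = abs(x - s)
            if d < width:
                # triangle of height 1/width and base 2*width -> area 1
                total += (1.0 - d / width) / width
        ys.append(total / n)
    return xs, ys
```

Because each triangle has unit area, the resulting curve integrates to 1, like a normalized density histogram.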
MOT 10/6/2012 - Changed histogram defaults. In past, the default was to
generate 100 bins for a histogram. Have modified that to use a value
that scales more reasonably with the number of points to bin. Value now
is
1.2 * @sdev(y) / npt^(1/3)
From stats texts, the recommended constant is 3.49. However, I find this
creates far too few bins to see useful trends, so my default is 1.2.
A fixed number of bins can still be specified using the -NX option.
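The bin-count rule can be sketched in Python (the function name and rounding to a whole bin count are illustrative):

```python
import statistics

def histogram_bins(y, constant=1.2):
    """Number of bins from the scaled Scott-type rule above:
    bin width = constant * sdev(y) / npt**(1/3).  The textbook
    constant is 3.49; 1.2 is the default chosen here."""
    n = len(y)
    width = constant * statistics.stdev(y) / n ** (1.0 / 3.0)
    span = max(y) - min(y)
    return max(1, round(span / width))
```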
MOT 9/18/2012 - Added Mersenne Twister algorithm for random number
generation. This is arguably the "best" current PRNG available. Implemented
also random numbers from common distributions using the Mersenne Twister
random() - Mersenne Twister random on [0,1]. Same as rnd_drand()
rnd_seed(iseed) - Initialize Mersenne Twister with iseed (default=time())
rnd_lrand() - Random integer value (32 bits)
rnd_drand() - Random double value on [0,1]
rnd_iuniform(min, max) - Uniform integers on interval [min,max]
rnd_uniform(min, max) - Uniform doubles on interval [min,max]
rnd_exponential(mean) - Exponentially distributed random numbers with specified mean
rnd_erlang(k, mean) - k-Erlang distributed random numbers with int k, double mean
rnd_weibull(shape, scale) - Weibull distributed random numbers with shape/scale parameters
rnd_normal(mean, sigma) - Normally distributed random numbers with mean/sigma
rnd_lognormal(mean, sigma) - Lognormal distributed random numbers where ln(x) has normal mean/sigma
rnd_triangle(min,max,mode) - Triangular distributed random numbers on [min,max] with mode
The function rnd() is now mapped to rnd_drand() as the best implementation.
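Python's own random module is also an MT19937 Mersenne Twister, so the rnd_* family above can be mimicked directly. The mapping is illustrative; note that weibullvariate takes scale before shape.

```python
import random

# random.Random is MT19937, the same generator family as above.
rng = random.Random()
rng.seed(12345)                    # rnd_seed(iseed)
u = rng.random()                   # rnd_drand(): double on [0, 1)
i = rng.randint(3, 7)              # rnd_iuniform(3, 7), inclusive
e = rng.expovariate(1.0 / 2.5)     # rnd_exponential(mean=2.5)
g = rng.gauss(0.0, 1.0)            # rnd_normal(mean, sigma)
w = rng.weibullvariate(2.0, 1.5)   # Weibull: (scale, shape) order here
t = rng.triangular(0.0, 1.0, 0.3)  # rnd_triangle(min, max, mode)
```

Seeding with the same value reproduces the same sequence, which is handy for repeatable simulations.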
MOT 2/6/2012 - Added ability to set surface elements. The form is
let s1[x,y] = value
where x and y are the coordinates of the point. Will set the value into
the closest element of the array given the specified x and y points.
This is guaranteed to be safe for all values of x and y ... being limited
to the actual matrix. x and y are compared against s1:x and s1:y and need
not be integers. If s1:x or s1:y are not strictly sorted (either ascending
or descending), the behavior is undefined but safe.
Because of the comma in the format s1[x,y], the use of "let" is required to
set the element value. Implicit let is not available currently.
This command now also will force me to create a new variable class of
matrices, including potentially n-dimensional as well as 2-dimensional.
This function allows working with matrices, but the (x,y) format is
backwards to the (row,column) typically expected for matrix manipulation.
MOT 2/6/2012 - WARNING (mostly to self): The change above required an order
switch in math parsing of expressions to allow a )]} character to be a
delimiter in a parsing string. Previously, the check for open or close
parenthesis was done before checking the delimiter list, now the delimiters
are checked first. Should have no effect except to enable the s1[-7,3] to
be properly parsed, but always nervous.
MOT 2/3/2012 - Extended function evaluator to be able to determine values
from a surface. Given s1 as a surface, s1(x,y) returns the value of
the surface at that point, interpolating as necessary. If the surface
x and y values are left as indices, this can be used to address the
surface as a matrix.
alloc s1 surface 20 30
eval s1[0,0]
eval s1[18,10]
This is equivalent to the @zinterp(s1,x,y) function which is now
obsolete (deprecated). From the @zinterp original implementation:
Attempts to interpolate between points on a surface. Uses constant
extrapolation beyond the edges of the surface. Attempts to properly
handle X,Y scales on surfaces, but don't push too hard (ie. unsorted).
Basically gives an analytical way to determine values off of a
surface for complex functions.
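The interpolate-with-constant-extrapolation behavior of s1(x,y)/@zinterp can be sketched in Python as bilinear interpolation with clamping at the grid edges. Illustrative only; it assumes strictly ascending xs and ys, the "don't push too hard" case noted above.

```python
import bisect

def zinterp(xs, ys, z, x, y):
    """Bilinear interpolation of z[row][col] (row i at y=ys[i],
    col j at x=xs[j]); clamps to the edges, i.e. constant
    extrapolation beyond the surface."""
    def locate(grid, v):
        v = min(max(v, grid[0]), grid[-1])   # clamp to the grid
        j = min(bisect.bisect_right(grid, v), len(grid) - 1)
        j0 = max(j - 1, 0)
        f = 0.0 if grid[j] == grid[j0] else (v - grid[j0]) / (grid[j] - grid[j0])
        return j0, j, f
    j0, j1, fx = locate(xs, x)
    i0, i1, fy = locate(ys, y)
    top = z[i0][j0] * (1 - fx) + z[i0][j1] * fx
    bot = z[i1][j0] * (1 - fx) + z[i1][j1] * fx
    return top * (1 - fy) + bot * fy
```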
MOT 1/14/2012 - Added help to USER and LOAD commands. Forgot how to use
the LOAD command.
MOT 11/11/2011 - Removed "fit power" as an alias for "fit poly". Will ultimately
add a new fit capability for true power law relationship (y = ax^m).
MOT 11/11/2011 - Added help to FIT and some of the FIT commands (for students at
Cornell)
MOT 11/11/2011 - Fixed problem in WordPos() function. Addition of ability
to scan for phrases as well as single words had unintended consequences.
Now fixed.
MOT 11/11/2011 - Version update to 2.11. (Wanted 11/11/11 so a few days early)
Added two new variables to be able to track the version
$version - real number representation of the version
$version:major - integer major version (2)
$version:minor - integer minor version (11)
Allows for version testing in the case of incompatibilities.
MOT 11/3/2011 - Added "fit constant [-sigma ... -range ...]" function to
the fitting list. Really implements the same as @wave(x,s) but in a
way that is perhaps more obvious. Same options as fit linear.
MOT 8/15/2011 - Added high precision timer() function. This function under
Windows has sub-microsecond resolution for timing events. Under
other operating systems, currently is equivalent to time() function,
with optional reset.
timer([reset]) - optional BOOL reset parameter
timer(1) ==> reset timer
timer() ==> get time since last reset
timer(0) ==> get time since last reset
MOT 7/15/2011 - Modified "alloc <name> string <length>". If the length is
specified as non-zero, then a string of that length will be created
and filled with blanks (plus one char for the terminating \0 null).
If the length is specified as zero (or negative), the variable name
is defined but the string is NULL (not empty, but invalid). Any
subsequent "let <name> = <value>" will properly define the variable. In all
cases, the string will always expand to the required size on a let
command. In previous versions, all values of the length were ignored
and the string was not initialized on an "alloc" command. The only
impact is that the string may be used immediately in expressions if
allocated as a true blank string, while it will give an error if left
as a NULL pointer. The default value for an "alloc" command is 1.
MOT 7/15/2011 - Tired of normalizing the Gaussian when I want to know the
peak. So created new function
gaussn(x,x0,dx)
for a Gaussian normalized to a peak of 1.0. Just removes the
1/(sqrt(2*pi)*sigma) normalization factor in front of the Gaussian distribution.
MOT 7/15/2011 - Tired of having to count lines in data files with text marking
the data. Added options -begin and -end to the Genplot read
command to allow the read to begin past the first line containing the
begin text, and terminate on the first line containing the end text.
read 110712_171334.dat -begin "[BEGIN_DATA]" -end "[END_DATA]" -col 0 1
The comparison ignores whitespace at the start of the line, and is
case insensitive. To use this, the data must be blocked with specific
marker lines such as those above. Otherwise, go back to counting and
use the -rows or -lines option.
This may be used with the -rows option. Row numbering starts at the first
line past the -begin line ... essentially lines prior to (and
including) the identified text line are ignored. Reading terminates on
the first line containing the end text.
Default is to not ignore any text. And obviously this has no effect on
any non-ASCII read.
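The -begin/-end matching rule can be sketched in Python (illustrative; the "containing" test after stripping leading whitespace, case insensitive):

```python
def read_block(lines, begin_text, end_text):
    """Return the lines strictly between the first line containing
    begin_text and the first subsequent line containing end_text.
    Matching ignores leading whitespace and case."""
    def matches(line, text):
        return text.lower() in line.lstrip().lower()
    out, inside = [], False
    for line in lines:
        if not inside:
            if matches(line, begin_text):
                inside = True
        elif matches(line, end_text):
            break
        else:
            out.append(line)
    return out
```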
MOT 7/15/2011 - Added -lines synonym for -rows in GENPLOT read command
MOT 7/15/2011 - Everything else in GENPLOT is 0 index based. Changed reading
of AVI files to match. Frame 0 is now the first one. So as not to break
old macros (but still may), code allows the last frame to be referred to
as $FrameCount-1 (correct) or $FrameCount (incorrect but previously used).
MOT 7/7/2011 - The functions magn(), imag(), real(), conj() and arg() now force
the function evaluator into complex calculation mode.
MOT 6/21/2011 - Modifications to REXX. During documentation of the
functions, a number of *corrections* and enhancements have been made.
ERROR in implementation fixed:
lastpos(needle, haystack, start) - start was handled wrong. It
appears that it should be the starting position within the
string of the backward search. It will be 1 based, with 1
being essentially strcmp() (only search starting at the first
character) and strlen(haystack) being the default (start
search at the last character in the string). However,
returns a 0 based position in the string.
Added/changed - all functions of REXX that make sense are fully implemented.
All share the issue of 0-based counting in GENPLOT versus 1-based in REXX.
The functions insert and xrange have specific deviations from standard behavior.
REXX string functions
abbrev(s1,s2,n) center(str,n [,c1]) centre(str,n [,c1])
compare(s1,s2 [,pad]) copies(str,n)
delstr(str,n[,len]) delword(str,n[,len]) index(s1,s2 [,n])
insert(str,new[,posn[,len[,pad]]]) justify(str,len[,pad]) lastpos(s1,s2 [,n])
left(str,n [,c1]) length(str) overlay(str,new[,posn[,len[,pad]]])
pos(s1,s2 [,n]) reverse(str) right(str,n [,c1])
space(str[,n[,pad]]) strip(str [,mode [,chr]]) substr(str,n[,len[,pad]])
subword(str,n [,len]) translate(str[,new,old[,pad]]) word(str,n)
wordindex(str, n) wordlength(str, n) wordpos(s1,s2 [,n])
words(str) verify(s1,ref [,mode [,start]]) xrange(istart, iend)
These functions are also now specifically listed in the "eval -list" command;
"eval -detail REXX" will give detailed usage.
substr() is widely used and its behavior now conforms to REXX. In particular, the third parameter
for the length of the substring should normally be unused. If given, the resulting string will
be exactly that size long, padded with (default) spaces.
MOT 6/21/2011 - Added code to handle special cases of stdin, stdout and
stderr in the file handling routines. These can now be used instead of
opening a specific file pointer.
fgets(stdin)
will read from the keyboard (no echo) returning on the <Enter> key. Likewise
fputs("line 1: ...", stdout)
will print the string to the console with no additional formatting.
MOT 6/14/2011 - Added the function printf() synonymous with fprintf() but
outputs directly to the console. Exact same format and same required
use of control characters for newline.
printf("format", var1, var2, expr1, expr2)
MOT 6/14/2011 - CHANGE IN BEHAVIOR OF FPUTC and FPUTS
The order of arguments for these functions is now consistent with the
C language equivalents. The stream pointer comes last rather than first.
fputc(ichar, funit)
fputs(text, funit)
Since adding more flexibility in argument handling, this was easy to do
and makes more sense than being inconsistent with the existing language.
Apologies to the few that it will impact.
MOT 6/14/2011 - CHANGE IN THE BETAI FUNCTION. CRITICAL.
The function betai(a,b,x) did not implement the incomplete beta
function B(x;a,b), but rather the regularized incomplete beta
function I_x(a,b). Have changed the order of the variables and
added a new function to implement both
betai(x,a,b) ==> B(x;a,b) Incomplete beta function
betai_Ix(x,a,b) ==> I_x(a,b) Regularized incomplete beta function
It is the regularized incomplete beta function used most extensively
in statistics.
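The relation between the two functions can be sketched numerically in Python. The midpoint quadrature here is an illustrative stand-in for the series/continued-fraction evaluation a real implementation would use.

```python
import math

def betai(x, a, b, n=20000):
    """Incomplete beta B(x;a,b) = integral from 0 to x of
    t^(a-1) * (1-t)^(b-1) dt, by midpoint-rule quadrature."""
    h = x / n
    return sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
               for i in range(n)) * h

def betai_Ix(x, a, b):
    """Regularized incomplete beta I_x(a,b) = B(x;a,b) / B(a,b),
    with B(a,b) = Gamma(a)*Gamma(b)/Gamma(a+b)."""
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return betai(x, a, b) / beta
```

Sanity checks: I_x(1,1) = x and I_x(2,1) = x^2, and by symmetry I_0.5(2,2) = 0.5.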
MOT 5/28/2011 - Modified the solve() function to not fail immediately if
the signs of the function at the specified boundary edges are the same.
Instead, try up to maxiter random positions within the range looking
for a sign reversal. Since a random search is made in the interval,
there is no guarantee that the function will return the same root each
time, but it is guaranteed to return a root (or an error). Using a
random search reduces the chance of missing a zero with periodic
functions.
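The strategy can be sketched in Python: probe random points in the interval for a sign reversal, then bisect. The fixed probe seed and iteration counts are illustrative assumptions, not the solve() internals.

```python
import random

def solve(f, lo, hi, maxiter=200, tol=1e-12):
    """Root finder: if f(lo) and f(hi) have the same sign, try up to
    maxiter random points in [lo, hi] for a sign reversal, then
    bisect.  Returns a root or raises ValueError."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        rng = random.Random(1)       # seeded here only for repeatability
        for _ in range(maxiter):
            m = rng.uniform(lo, hi)
            if flo * f(m) <= 0:      # found a bracketing point
                hi = m
                break
        else:
            raise ValueError("no sign change found in interval")
    for _ in range(200):             # plain bisection once bracketed
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(hi - lo) < tol or fmid == 0:
            return mid
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

With f(x) = (x-1)(x-2) on [0,3] both endpoints are positive, so the random probe is what finds the bracket; the returned root then depends on which probe hits first, just as described above.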
MOT 5/28/2011 - Worked a bit on the logarithmic labeling. Now draws
tertiary labels for short logarithmic axes. Down to the level of
1.1, 1.2, ... for very small ranges (which is really stupid). Also
redid autoscaling so it works much more responsibly.
MOT 2/27/2011 - Increased the maximum length of a continued line to 16384
characters. Also changed alias code so it could accommodate a
definition with almost this length.
MOT 2/15/2011 - Added extensive list of special characters draw capability.
Basically added the entire symbol font code page where I could identify
the symbols and give them LaTeX like definitions.
MOT 1/10/2011 - CHANGE TO @f_test(ar1,ar2) function *** IMPORTANT ***
I have concluded that there was a mistake in coding the @f_test function.
It is now coded such that it explicitly returns the same as
@f_test(ar1,ar2) = f_test(@var(ar1)/@var(ar2),ar1:npt-1, ar2:npt-1)
This is reflected in the new (upcoming) PDF documentation.
MOT 1/12/2011 - Added digamma(x) function
digamma(x) which is also known as psi(x) is the derivative of lngamma.
Now implemented for real arguments only with precision ~10^{-15}
digamma(x)
Like lngamma, it is undefined for 0 and negative integers, and the function
returns TMPREAL_MAX.
MOT 1/12/2011 - Modified lngamma(x) function
Now returns TMPREAL_MAX instead of 0 for 0 and negative integer arguments.
MOT 1/12/2011 - Added aliases for inverse trig functions
asin -> arcsin (same for acos,atan,etc.)
asind -> arcsind (same for acosd, atand, etc.)
asinh -> arsinh (same for acosh, atanh, etc.) [Note only arsinh, no arcsinh]
These made sense to do as writing the new documentation.
MOT 1/5/2011 - Added ability to "beep" from the code
_beep(freq, durat)
will beep. Both parameters are optional and default to 880 Hz (A)
and 150 ms. For fun, try
foreach (0,2,4,5,7,9,11,12,11,9,7,5,4,2,0) qev beep(220*2^{(3+%f)/12},200)
MOT 1/4/2011 - Low level file I/O for working with serial ports (and most
USB instruments).
_open_comx(int port [,string baud [,int ms_timeout]])
_get_baud()
_set_baud(string baud)
_get_timeout()
_set_timeout(RdInterval, RdPerByte, RdOverhead, WrByte, WrOverhead)
_open_comx() opens the specified COMx port as a serial Low I/O stream.
The optional baud parameter is a text string describing the flow
parameters, such as "9600,n,8,1". It is parsed by the OS to set
parameters. The third (optional) parameter is a timeout setting for
the _read() command in ms. The total time waiting is the number of
bytes requested plus this timeout (in ms).
setv fd = _open_comx(10,"921600,n,8,1",2000)
opens COM10 with typical USB parameters and a 2 second timeout on read
with no data. Note that
let aline = _read(fd, 2000)
will timeout in 4 seconds (2 for the potential 2000 characters plus
the 2 seconds specified). Once a read receives a character, it will
always timeout after 50 ms if no additional characters are received.
These parameters work very well for USB serial devices, but the code
may need to be modified to handle slow devices (300 baud). If the
rate and timeout values are not given, they are not set and retain
default values from the operating system.
The string for setting baud/flow parameters in _open_comx() and _set_baud()
is identical in format to that required by the Windows "mode" command.
You should not set the com port, only the remaining parameters.
[baud=b][parity=p][data=d][stop=s][to={on|off}][xon={on|off}]
[odsr={on|off}][octs={on|off}][dtr={on|off|hs}][rts={on|off|hs|tg}]
[idsr={on|off}]
The older format works as well. "9600,n,8,1" to specify a baud rate of
9600 with n parity, 8 data bits and 1 stop bit.
The _get_baud() and _get_timeout() functions print (do not return) the
current values of all the communication parameters and the members of
the timeout structures. These can be individually modified using the
_set_baud() and _set_timeout() functions. For _set_timeout(), the
default for any parameter is for no change.
_set_timeout(,,1000) <== only changes the read overhead time
Consider these calls to be fragile and that they may change.
setv fd = _open_comx(10,"921600,n,8,1",500)
qev _write(fd, "1PA?")
eval _read(fd,20)
qev _close(fd)
MOT 1/4/2011 - Low level file I/O capability including serial ports (Win only)
Added low level I/O functions _read(), _open(), etc. and associated
constants.
_open(char fname, int flags [, int mode])
O_CREAT O_APPEND O_TRUNC
O_BINARY O_TEXT
O_RDONLY O_RDWR O_WRONLY
_creat(char fname [,mode])
_close(int fd)
_read(int fd [, int maxcount]) maxcount defaults to 256
_write(int fd, str string)
_query(int fd, str string [, int maxcount]) maxcount defaults to 256
_lseek(int fd, int offset, int whence)
SEEK_SET SEEK_CUR SEEK_END
_eof(int fd)
_tell(int fd)
In general, these directly convert to the equivalent C routines except
for _read() and _query() which return a string up to the size specified.
_query() is a sequence of _write() and an immediate _read().
MOT 10/12/2010 - Added ability to generate arbitrary (text based) labels to
the graph. This replaces the automatic labeling. Must specify lots of
values to use.
Values of the major/minor tick marks (REAL arrays)
Labels at the major tick marks (STRING arrays)
Label size (0 ==> use graph max) (REAL)
This permits more complex non-linear axes (Weibull probability) as well
as non-numeric labels (calendar months Jan, Feb, ...)
alloc s1 string_array 12
foreach ("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec") let s1[%i] = %f
alloc m1 array 12 let m1 = i+1
alloc m2 array 100 let m2 = (1+i/4)
axctrl bot -user m1 m2 s1 0.0
label bot "Month"
reg bot 1 12 force bot yes
axis
or for labeling a non-linear axis
xtop nonlinear
alloc m1 array 12
alloc s1 string_array 12
alloc m2 array 100
foreach (-20 -10 0 10 20 30 40 60 80 100 150 200) let m1[%i] = %f let s1[%i] = "%f"
let m2 = 0
foreach (-25 -15 -5 5 15 25 35 45 50 55 60 65 70 75 80 85 90 95 100 110 120 130 140 160 170 180 190 210 220) let m2[%i] = %f
xtop nonlinear
label bot "1000/T (K^{-1})"
label top "Temperature (\deg C)"
label left "Rate (m/s)"
axctrl bot -nouser
axctrl top -user m1 m2 s1 0.0
reg bot 2 4 force bot yes
reg left -1.0 0.0
alloc m3 array 10
alloc m4 array 100 let m4 = log((i+1)/100)
alloc s2 string_array 10
foreach (0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0) let m3[%i] = log(%f) let s2[%i] = "%f"
axctrl left -user m3 m4 s2 0.0
axis
MOT 10/2010 - Added @covar and @covariance to statistical tests.
@covar(ar1, ar2 [, ul, uh])
@covariance(ar1, ar2 [, ul, uh])
The covariance estimate is 1/(N-1) * sum (x - xbar)*(y - ybar) and is a
measure of correlation in error between two measurements.
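The estimate in Python (illustrative, no-range case only):

```python
def covar(x, y):
    """Sample covariance: 1/(N-1) * sum (x_i - xbar)*(y_i - ybar)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
```

Note covar(x, x) reduces to the sample variance, and a constant second argument gives zero.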
MOT 7/2010 - Modified LexGetFile to allow , and ; in filenames without use
of quotes. Should have been done long ago.
MOT 7/2010 - Added code to catch attempts to set NPT beyond NPTMAX on 2D and
3D curves. Works for curve:npt and genplot's NPT when set by the command
LET (either implicitly or explicitly). No more crashes when making NPT
bigger before increasing the curve size :-)
The curve will be automatically resized to 10% beyond the value if
possible (since resize is a marginally expensive operation). Gives error
and sets NPT to NPTMAX if the resize cannot be done.
GENPLOT: npt = 10000
INFO: NPT (10000) requested is larger than NPTMAX. Increased curve size to 11000
GENPLOT: npt = 100000000
ERROR: Attempt to set NPT (100000000) to more than GVI_MAX_LENGTH (67108864).
Value set to nptmax (11000)
GENPLOT:
MOT 7/2010 - Added statistical functions to return the mean and sigma when the
the uncertainties (or relative uncertainties) are known. Referred to as the
weighted mean and related values.
@wmean(x, sigma [,ilow, ihigh]) Returns weighted sum, where weighting
@wavg(x, sigma [,ilow, ihigh]) is 1/sigma^2 for each point
@wave(x, sigma [,ilow, ihigh])
@wabsavg(x, sigma [,ilow, ihigh]) Weighted sum of the absolute value of x
@wsigma(x, sigma [,ilow, ihigh]) Uncertainty in the mean given known and
valid uncertainties. This is equivalent to
the standard deviation of the mean in a well
defined statistical environment.
@wvariance(x, sigma [,ilow, ihigh]) Variance as determined from the data itself
@wvar(x, sigma [,ilow, ihigh]) using sigma only as the relative weights
@wstd(x, sigma [,ilow, ihigh]) Standard deviation estimated from the data
@wstdev(x, sigma [,ilow, ihigh]) using sigma only as the relative weights.
@wstddev(x, sigma [,ilow, ihigh]) If sigma's are valid, should be average sigma
@wsdom(x, sigma [,ilow, ihigh]) Estimate of the standard deviation of the mean
using sigma only as the relative weights.
If sigma's are valid, should almost equal
@wsigma(x, sigma)
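The core formulas can be sketched in Python (illustrative; no-range case, weights 1/sigma_i^2):

```python
import math

def wmean(x, sigma):
    """Weighted mean with weights 1/sigma_i^2 for each point."""
    w = [1.0 / s ** 2 for s in sigma]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def wsigma(x, sigma):
    """Uncertainty in the weighted mean for valid sigmas:
    1 / sqrt(sum 1/sigma_i^2)."""
    return 1.0 / math.sqrt(sum(1.0 / s ** 2 for s in sigma))
```

With equal sigmas this reduces to the plain mean with uncertainty sigma/sqrt(N), the standard-deviation-of-the-mean behavior described above.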
MOT 6/2010 - Set maximum number of arguments in a function to 32. This
has always been the case but never checked.
MOT 6/2010 - Added ability to specify an array or any other object in the
math namespace as a potential variable in a function definition.
This is really a string replacement that acts to be able to specify an
array in the function definition. The prefix * indicates that a variable
should be taken as being an array.
define f(x,*ar) = ar[0]+x*ar[1]
alloc cp array 2 let cp(..) = 7,20
eval f(7,cp)
define f(*c) = @max(c:y)-@min(c:y)
create y = sin(x) -range -15 15 eval f($plot)
MOT 5/2010 - Increased the buffer for fgets() from 4096 to 32768 chars.
MOT 5/2010 - Added % as a valid terminus on a number for percentage.
GENPLOT: (1+7%/360)^360-1
:= 0.072500883
MOT 5/2010 - Added functions
mantissa(x) - returns signed mantissa of a large number
exponent(x) - returns exponent for scientific notation
Common use fprintf("%.3fE%d", mantissa(x), exponent(x))
MOT 4/24/2010 - Moved lsqfit into GPTFIT so that it shares the cf$[] array
and sets both the values into the function evaluator and the function
fit(x).
MOT 4/16/2010 - Added code so integrals of complex functions along the real
axis are possible. Note, the contour of the integral is limited to the
real axis so upper and lower limits must be real values
MOT 4/7/2010 - Added ability to link to external functions that are functions
of both real and string variables. New linkages
GVLinkFncA()
GVLinkStrFnc()
Used primarily with user routines to extend capability. Needed myself
to create a user module that permits control of GPIB instruments.
MOT 4/1/2010 - Modified the solve() function to work with double precision
variables rather than single. This is more precise and avoids problems
with significant digits.
solve(tan(x)-1.28E5*x,0.99999*pi/2,(1-1E-8)*pi/2)
Used to fail because (1-1E-8)*pi/2 too close to pi/2 numerically.
MOT 1/30/2010 - Added function to do summation and continued product over
a function.
sum(fnc(var)|var, ilow, ihigh [,istep])
prod(fnc(var)|var, ilow, ihigh [,istep])
For the sum and product, the assumed variable is i, not x. Use the |
format to specify some other dummy variable.
sum(i,0,10) = 55 sum(i,0,10,2) = 30
prod(i,1,10) = 10!
sum(cos(2*pi*n*x)|n,0,100)
The function is fully implemented for both real and complex functions.
Example:
identify -place 5.65 6.0
reg bot -6 6
reg left -1 1
define f(x,n) = sum[ m1n(i)*cos((2*i+1)*x)/(2*i+1), 0, n-1 ]
axis
foreach (1 2 3 5 10 20 50 100 200 500 1000) {
ov -f f(x,%f) -range -6 6 -points 500 -lt 1 -identify "%f terms"
}
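The sum/prod semantics in Python (illustrative names; GENPLOT's versions take an expression and a dummy variable rather than a callable):

```python
def fsum(f, ilow, ihigh, istep=1):
    """sum(f(i)) for i = ilow .. ihigh inclusive, step istep."""
    return sum(f(i) for i in range(ilow, ihigh + 1, istep))

def fprod(f, ilow, ihigh, istep=1):
    """prod(f(i)) for i = ilow .. ihigh inclusive, step istep."""
    total = 1
    for i in range(ilow, ihigh + 1, istep):
        total *= f(i)
    return total
```

These reproduce the identities quoted above: sum(i,0,10) = 55, sum(i,0,10,2) = 30, prod(i,1,10) = 10!.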
MOT 1/29/2009 - Added standard deviation of the mean to the functions
@sdom(array[,istart,iend])
MOT 1/12/2009 - Fixed problem with matrix transpose and added more
matrix manipulation. Matrix transpose failed to relink the X and Y
arrays which created problems in cases where the pointers were
pulled by name rather than structure. Now properly relinked.
Added
matrix put_submatrix
matrix submatrix_put
to insert a matrix into an existing matrix. Nice for making quilts.
Also added aliases
matrix get_submatrix
matrix get_row
matrix get_col
MOT 11/10/2009 - Added ability to write proper csv files.
WRITE -csv
will generate comma separated values (csv) rather than my normal
tab separated values. Allows the file to be opened in Excel without
any further issues.
MOT 11/10/2009 - Added RDF capability for matrix evaluation
MATRIX RDF [-center <x,y>] [-bin <width>]
will generate the radial distribution function from the image in
<surface>. By default, the origin of the curve is given by the zero
in the X and Y values of the surface. However, the option -CENTER will
permit any value to be taken for center. The RDF histogram by default
will be 1 average "pixel" of the image (good in most cases). Any value
of the bin width may be specified with the -BIN option. y[i] of the
result contains the average of all pixels in the image at distance
i*dx <= r < (i+1)*dx. Great for TEM diffraction image analysis.
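The binning rule can be sketched in Python (illustrative; assumes a z[row][col] image with coordinate arrays xs and ys):

```python
import math

def rdf(z, xs, ys, cx=0.0, cy=0.0, binwidth=1.0):
    """Radial distribution of an image: y[i] is the average of all
    pixels whose distance r from (cx, cy) satisfies
    i*binwidth <= r < (i+1)*binwidth."""
    sums, counts = {}, {}
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            r = math.hypot(x - cx, y - cy)
            b = int(r // binwidth)
            sums[b] = sums.get(b, 0.0) + z[i][j]
            counts[b] = counts.get(b, 0) + 1
    nbins = max(sums) + 1
    return [sums[b] / counts[b] if b in counts else 0.0
            for b in range(nbins)]
```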
MOT 6/28/2009 - Added ability to clip ANNOTE structures to the graph
area. By default, annote is allowed to draw over the full canvas.
Commands (such as line and arrow) that allow options now permit the
option -clip to limit to the data area.
line -lw 8 -clip 1 7 1 9 2
MOT 6/27/2009 - Added ability to specify an editor for the editor or eps
commands.
declare $editor = notepad
declare $editor = wordpad
declare $editor = "c:\program files\eps13\bin\epsilon.exe"
will cause this to become the editor of choice. Can be set in the
genplot.ini file.
MOT 6/18/2009 - Added new palettes (thanks to Roger de Reus)
More rainbow palettes
GREY GOLD COPPER PINK AFM
HOT HEAT JET
COLD COOL
HSV BONE PRISM
SPRING SUMMER AUTUMN WINTER
MOT USER
To see what these do:
create -surface s1 z=x -range -1 1 -1 1 -rows 256 -col 256
pl s1 -bitmap -palette hot
MOT 4/27/2009 - Change to matrix rotate function. Prior code treated all
surfaces as just a matrix and rotation assumed the same "step size" in
X and Y. Rotations failed to thus maintain proper dimensions when
rotated with non-equal step sizes for rows and columns. The code now
properly handles different X/Y range, but still assumes uniform
spacing of rows and columns across the X/Y range. At some point,
may properly handle non-uniform spacing along X/Y as well, but not today.
MOT 4/21/2009 - Added function to return the last occurrence of one string
within another. Optionally starts search offset characters from the
end. Note, if offset is less than or equal to the length of the
needle, it essentially has no meaning.
lastpos(needle, haystack [,offset])
lastpos("abra", "abracadabra") = 7
lastpos("abra", "abracadabra", 5) = 0
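A rough Python equivalent of the behavior above (0-based positions, matching the examples; here -1 rather than 0 signals "not found", which may differ from GENPLOT):

```python
def lastpos(needle, haystack, offset=0):
    """Last occurrence of needle in haystack; the search ignores the
    final `offset` characters of the haystack."""
    end = len(haystack) - offset
    return haystack.rfind(needle, 0, end)
```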
MOT 4/21/2009 - Changed isdir, filesize and filedate (and isfile)
functions to more accurately deal with directory name conflicts. A
valid directory is now any string which, when appended with "/fname", forms
a valid filename. So in windows "C:" is valid, as is "C:////". Any
number of trailing slashes or backslashes are now acceptable (only
slashes in UNIX). Allows isdir("c:/windows/") to return true. For
files, these rules do not hold. A file must be such that the same
string could be successfully opened (as with a read).
MOT 3/25/2009 - Modified the graph window so window remains open when
a new device is created. Each time "DEV PM" is entered, the
existing window remains and a new window is opened. The old one
may be resized and manipulated -- but the data is static. Windows
that are inactive may be freely killed with the X button. The active
window will continue to complain.
When Genplot or RUMP do exit, all windows are closed now.
MOT 3/25/2009 - Added code in devini() to allow the pm graph window to
(1) change titles based on how many calls have been made
(2) exit cleanly once the main program exits
These are related to the ability to have multiple dev pm windows
open for archiving graphs. Required change to the inidsp msgs.
MOT 3/25/2009 - Modified strip() to enable mode of removing all occurrences
of a character within a string.
strip(str [,mode [,char]]) mode = leading/trailing/both/all
MOT 3/25/2009 - Modified code so using "-ply right" works more as expected
with a non-linear axis. Will now automatically make the transform
right_to_left(y) or top_to_bottom(x) to convert the data to the
linear axes and plot as expected. Effectively, the data is indeed
plotted against the non-linear axis.
This is guaranteed to work correctly with simple 2D graphs. Haven't
looked at how this might interact with drawing 3D graphs with
the non-linear axes being selected. If there are obvious bad
behaviors, will modify the code.
Finally, there is no test that the transforms are valid. Errors
are probably set to zero values, but it is identical to the user
doing the command "let y = right_to_left(y)".
MOT 3/25/2009 - Added functions for string oriented bitwise operations
str = bitor(bstr1, bstr2) str = hexor(hstr1, hstr2)
str = bitand(bstr1, bstr2) str = hexand(hstr1, hstr2)
str = bitxor(bstr1, bstr2) str = hexxor(hstr1, hstr2)
Strings are zero extended to the left, and only the valid characters in
each string are considered.
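The zero-extension rule can be sketched in Python for the binary-string variants (illustration only; the helper bitop is a hypothetical name, not a GENPLOT function):

```python
def bitop(a, b, op):
    """Apply a bitwise op to two binary strings; the shorter string is
    zero-extended on the left, as described above."""
    w = max(len(a), len(b))
    a, b = a.zfill(w), b.zfill(w)
    return "".join("01"[op(x == "1", y == "1")] for x, y in zip(a, b))

bitor  = lambda a, b: bitop(a, b, lambda x, y: x | y)
bitand = lambda a, b: bitop(a, b, lambda x, y: x & y)
bitxor = lambda a, b: bitop(a, b, lambda x, y: x ^ y)
```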
MOT 3/25/2009 - Added functions that convert strings between hex and
binary representations.
hex2bin("str")
bin2hex("str")
The final string length is presently limited to 8192 characters.
MOT 3/25/2009 - Added more base conversion functions for Pat. The full set
is now:
int2hex(int) hex2int("hex_str") hex_str may be 0x7F
int2oct(int) oct2int("octal_str")
int2bin(int) bin2int("binary_str")
int2base(int,base) base2int("str", base)
The conversion from int to a string is limited to values that
are exactly representable as a simple integer (-2^31 to 2^31-1).
Conversion of a string to integer terminates on the first character
of the string that is not valid in the specified base.
bin2int("11031101") = 6
For base 22 and above, recommend "upcase(int2base())" to avoid the
potential confusion between "1" and "l" ("L").
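The termination-on-invalid-character rule can be illustrated in Python (a sketch of the described behavior, not GENPLOT source; the 0x prefix handling of hex2int is not reproduced here):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def base2int(s, base):
    """Accumulate digits, terminating on the first character not valid
    in the given base (so "11031101" in base 2 reads as "110" = 6)."""
    val = 0
    for ch in s.lower():
        d = DIGITS.find(ch)
        if d < 0 or d >= base:
            break
        val = val * base + d
    return val

def int2base(n, base):
    """Integer to string in the given base (lowercase digits)."""
    if n == 0:
        return "0"
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = ""
    while n:
        n, d = divmod(n, base)
        out = DIGITS[d] + out
    return sign + out
```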
MOT 1/6/2009 - Modified behavior of sorting for 3D curves to sort on
both X and then Y as the default. Added an option "-XY" which
is explicit, but this is now the default behavior. Sort follows
X, and then if equal compares Y values. It is not possible to
do sorts on XZ or YZ presently. Have to exchange the arrays
first - sorry. But -STRICT, -REVERSE, and -RANDOM are valid.
MOT 11/13/2008 - Added Z to the CULL command. Now can cull based on Z
values in a 3D curve. Only potential side effect is the RANGE
mode for a 3D curve which now requests X,Y and Z whereas in the
past would only request X and Y (even for a 3D curve).
Also added help to the command.
MOT 11/12/2008 - Expanded matrix command.
MATRIX HISTOGRAM -- generates histogram from a surface. Options
similar to TRANSFORM HISTOGRAM. Use
MATRIX HISTOGRAM -? to see usage
MATRIX MARK -- more generic threshold generator. Can window within
limited area, and change the values set in the Z
array. Again, MATRIX MARK -? gives usage.
MOT 10/6/2008 - Added palette RANDOM to the list. This palette divides the
range into 1024 random colors, with exception that 0.0 is black and 1.0
is pure white. Useful for things like color maps of grain extraction
from an SEM image.
MOT 8/11/2008 - Minor change. Variable result$ is now a DOUBLE rather
than a REAL. Higher precision, reflecting machines now moving to
64 bit anyway. Impacts only SOLVE and TRANSFORM INTEGRATE. Should
be completely transparent to users.
MOT 8/10/2008 - Using solve() to solve two simultaneous equations.
Given requirement
f(x,y) = 0 g(x,y) = 0
Idea is to use solve on f to determine x as function of y and then
substitute into g and determine the y that satisfies the equation.
As a specific example, consider
f(x,y) = cos(x)-y = 0 g(x,y) = sin(x)-y = 0
which has the obvious solution at x=pi/4, y = sqrt(0.5). To be
coded
define xr(y) = solve(cos(x)-y,0,pi/2) /* Gives x satisfying f(x,y)=0
setv yroot = solve(sin(xr(y))-y|y,0,1)
setv xroot = xr(yroot)
eval xroot,yroot,pi/4,sqrt(0.5)
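The same nested-solve idea in Python, with a simple bisection standing in for GENPLOT's solve() (a sketch, not the actual root finder):

```python
from math import cos, sin, pi, sqrt

def solve(f, lo, hi, eps=1e-12):
    """Bisection root finder standing in for GENPLOT's solve();
    assumes the root lies in [lo, hi]."""
    flo = f(lo)
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f(x,y) = cos(x)-y = 0,  g(x,y) = sin(x)-y = 0
xr = lambda y: solve(lambda x: cos(x) - y, 0.0, pi / 2)  # x satisfying f(x,y)=0
yroot = solve(lambda y: sin(xr(y)) - y, 0.0, 1.0)        # substitute into g
xroot = xr(yroot)                                        # expect pi/4, sqrt(0.5)
```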
MOT 8/8/2008 - Warnings on dummy arguments with functions
The dummy variable for functions of functions should normally not be
used otherwise. Many cases work, but others will fail as the program
can't always figure out what is dummy and what is real. Examples:
create y = dydx(cos(x),x) <== happens to work since the second x is an array
create y = dydx(cos(xp)|xp,x) <== safer and definitely more clear
define f(x) = dydx(cos(x),x) <== fails - what's an argument versus dummy
eval f(pi/4)
define f(x) = dydx(cos(xp)|xp,x) <== works properly
eval f(pi/4)
MOT 8/8/2008 - Implemented function to numerically integrate an arbitrary function
integrate(fnc,a,b [,eps,minevals,maxevals])
fnc - arbitrary function, possibly with specified dummy variable
a,b - limits of the evaluation
eps - fractional change signifying convergence (default = 1E-5)
minevals - minimum number of evaluations of function (default = 256)
maxevals - maximum number of evaluations of function (default = 65536)
The algorithm is robust for almost any function, but not necessarily the most
efficient for a specific function. In the future, there may be an additional
parameter specifying the algorithm.
Current algorithm is an extended Simpson's method based on recursive Trapezoidal
rules. Evaluation of the function is closed (end points are evaluated) on a
uniformly spaced grid. Because of the recursion, the number of evaluations will
always be a power of 2. The number of points is doubled at each iteration, until
the fractional change in the estimate of the integral is less than eps. The
minimum number of evaluations is necessary to ensure that significant features of
the function are not missed in a coarse grid (e.g. integrate(gauss(x,0,1),-100,100)) and
the maximum is to limit the total time. For extremely complex but smooth
functions, it is reasonable to reduce the minimum number of evaluations to as few
as 16.
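A minimal Python sketch of this scheme, assuming a trapezoid refinement with a Simpson extrapolation step (an illustration of the described algorithm, not GENPLOT's actual implementation):

```python
import math

def integrate(f, a, b, eps=1e-5, min_evals=256, max_evals=65536):
    """Closed, uniformly spaced trapezoid rule; the point count doubles
    each pass until the fractional change in the Simpson-extrapolated
    estimate drops below eps."""
    t_prev = 0.5 * (b - a) * (f(a) + f(b))  # 2-point closed trapezoid
    evals, n, s_prev = 2, 1, None
    while evals <= max_evals:
        h = (b - a) / n
        t = 0.5 * t_prev + 0.5 * h * sum(f(a + (i + 0.5) * h) for i in range(n))
        evals += n
        s = (4.0 * t - t_prev) / 3.0        # Simpson combination of T_n, T_2n
        if (s_prev is not None and evals >= min_evals
                and abs(s - s_prev) <= eps * max(abs(s), 1e-300)):
            return s
        t_prev, s_prev, n = t, s, 2 * n
    return s_prev                           # did not converge within max_evals
```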
The value of eps can be decreased for greater accuracy, but numerical stability
has to be considered, especially for integrals that have an exact value of 0.0.
Since a fractional accuracy is required, the number of iterations can increase
to a point where the finite precision of the representation prevents convergence.
For example, integral(sin(x),0,2*pi,1E-6) requires 2^24 evaluations to converge
and gives a less accurate answer than 2^16 evaluations.
eval integral(sin(x),0,2*pi,1E-6,,2^16)
eval integral(sin(x),0,2*pi,1E-6,,2^24)
eval integral(sin(x),0,2*pi,0.01)
The first generates a convergence error, but is more accurate than the second,
which requires 16 million evaluations of sin(x). The last, with only 0.01
required relative precision, gives 3.5x10^-16 with roughly 1024 evaluations -- the
same result obtained with 2^24 (16 million) evaluations.
For most functions, specifying a precision below 1E-6 is unnecessary, but for
non-pathological cases doesn't invoke significant cost. Convergence is O(1/N^4)
where N is the number of evaluations so one additional "iteration" results in
a 16x reduction in the error. For the specific case integral(sin(x),0,m*pi):
# evals m=1 m=3 m=5 m=13 m=25
4 2.0045598 2.8720866 -4.7868110 18.041038 50.113994
8 2.0002692 2.0255403 2.2914919 -6.6257091 -18.404748
16 2.0000166 2.0013951 2.0116391 2.1680170 -7.3241477
32 2.0000010 2.0000845 2.0006641 2.0074576 3.1646882
64 2.0000001 2.0000052 2.0000406 2.0004333 2.0305619
128 2.0000000 2.0000003 2.0000025 2.0000266 2.0016485
256 2.0000000 2.0000000 2.0000002 2.0000017 2.0000996
512 2.0000000 2.0000000 2.0000000 2.0000001 2.0000062
1024 2.0000000 2.0000000 2.0000000 2.0000000 2.0000004
A very different issue arises in periodic functions where the data is undersampled
and aliasing effects occur. In these cases, the value may appear to be converging
while undersampled, and then the value rapidly changes. Consider the periodic function
integral(sin(x)^2,-100,100)
# evals value
4 17.725614
8 17.760894
16 17.76304
32 17.763174
64 17.763182
128 127.41730
256 100.45674
512 100.43762
1024 100.43671
2048 100.43665
4096 100.43665
8192 100.43665
16777216 100.43665
After 64 evaluations, the value appears to be converging to 17.76 -- a direct
result of undersampling the periodic function. Only after 128 points is there
sufficient "resolution" to see the full behavior of the function. The worst
possible scenario is represented by the function integral(sin(x),-30*pi,30*pi).
Finally, some functions are poorly sampled with an equispaced grid -- such as
1/x or similar functions with singularities. A coordinate change can quickly
improve the convergence and accuracy.
int_a^b (1/x) dx: let u = ln(x), yielding x = e^u and dx = e^u du
int_a^b (1/x) dx = int_ln(a)^ln(b) du
where the latter is obviously trivial. A slightly less trivial example with
a similar result after the same substitution
int [ (1/x) sqrt(1+x^2) ] dx = int [ sqrt(1+e^(2u)) ] du
and indeed
integral(1/x*sqrt(1+x^2),0.0001,90) = 98.897996
integral(sqrt(1+exp(2*u))|u,ln(0.0001),ln(90)) = 98.897996
However, the first requires 8 million evaluations to converge while the latter
converges with only 128 evaluations. Plotting the two functions makes it
obvious why the latter is so much more efficient.
Bottom line: caveat emptor - let the user beware. Know the basic behavior of
the function before using the integral function blindly. It is
trivial to plot it over the interval first before doing the
integral. This may also suggest the possible coordinate change
required to improve the integration.
Any of the optional arguments may be left out and the default will be used.
integrate(cos(x)^2,-pi,pi,1E-10) /* Higher precision
integrate(cos(x)^2,-pi,pi,,32,128) /* Fewer evaluations same precision
Errors include inability to evaluate the function or failure to converge.
Using integral() to create a curve is valid, but may not be the most efficient.
create y = sin(x) -range 0 2*pi -points 201
transf y integrate
create y = integral(sin(x),0,x) -range 0 2*pi -points 201
are similar. The first is much faster as it requires only 201 evaluations
of the function, while the second typically requires 200,000 evaluations.
However, the first is increasingly accurate for later points -- y[199] is a good
approximation of the full integral while y[1] is probably pretty poor. In
contrast, the second is uniformly accurate (fractional) for all points. The
error at x=pi is 1.5E-4 (from the exact value of 1.0).
MOT 8/8/2008 - Implemented function to numerically differentiate an arbitrary function
dydx(fnc,x[,dx])
fnc - arbitrary function, possibly with specified dummy variable
x - point to evaluate the derivative
dx - optional argument specifying the "delta" for evaluation
Simple numerical dydx(f) = (f(x+dx)-f(x-dx))/(2*dx).
Examples: eval dydx(cos(x),pi/2)
create y = dydx(cos(theta)|theta,x)
eval dydx(f(x),0,1E-3)
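The central difference above is one line of Python (illustration of the formula only):

```python
import math

def dydx(f, x, dx=1e-6):
    """Simple central difference: (f(x+dx) - f(x-dx)) / (2*dx)."""
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)
```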
MOT 8/8/2008 - Implementing solve also allows implementing similar "functions
of functions". Have two more:
dydx(fnc,x) derivative of function at x
integral(fnc,xl,xh) integral of a function between limits
integrate(fnc,xl,xh)
dydx is fine - simple numerical derivative.
integrate is reasonable and may improve with time
MOT 8/8/2008 - Major new functions: solve(fnc,low,high) which will return the
root of the function, assuming one exists between low and high. This
required considerable code modification and should be considered
"fragile" for the moment. Allows full functionality of the SOLVE
command as an inline evaluation.
solve(fnc, xlow, xhigh)
solve(fnc, xlow, xhigh, guess)
solve(fnc, xlow, xhigh, guess, epsilon)
solve(fnc, xlow, xhigh, guess, epsilon, maxiter)
Guess, epsilon and maxiter are optional but must be given in order. If
defaults are to be used, argument may be skipped.
solve(fnc, xlow, xhigh,,,500)
is valid.
The fnc must involve a dummy argument, with "x" as default. Any other
dummy may be specified by following the function with |. Examples:
cos(x) <== x is the dummy argument
v/20+sin(theta)|v <== v is the dummy argument
The function can be arbitrarily complex and may involve arrays or any
other calls. Don't get too fancy with string arguments however - remember
these are still fragile.
Examples:
eval solve(cos(x)-x,0,1) /* Simple use
create y = sin(x) -range 0 1 -by 0.1 /* Array use
eval solve(sin(xp)-y|xp,0,pi/2) /* Should be 0,0.1,0.2,..
MOT 8/8/2008 - The parsing and evaluating of expressions was rewritten to
be fully re-entrant. Required to enable the solve(fnc,xl,xh) function.
MOT 8/8/2008 - Added operating system functions to function evaluator.
chdir("dir") | cd("dir")
mkdir("dir")
rmdir("dir")
rm("file") | unlink("file")
mv("old", "new") | rename("old", "new")
All return 0 if successful and -1 if unsuccessful. No further error
messages are available.
NOTE: Unlink and rm aren't exactly the same, but for most will behave
the same. At the program level, use the remove() or unlink() functions.
MOT 12/16/2007 - Added EVAL -?, EVAL -LIST and EVAL -DETAIL commands to be
able to list all the internal functions available. See eval -? for
usage of the -LIST, -DETAIL and -APROPOS options.
MOT 12/11/2007 - Modified parse and eval functions so errors during read
stop printing after the specified $maxreadwarn limit. Only valid for ASCII reads.
Previously, parse or evaluation errors would continue to post even
after the warnings from READ had been max'd out.
MOT 12/9/2007 - Removed the @t1_test(ar,mean). Modified the @t_test function
to accept two formats.
@t_test(ar1, ar2) - Student t-test on two arrays of values
@t_test(ar1, rval) - Student t-test against anticipated mean
As these are the two most common uses of the t-test, it made sense
to put in the code effort to detect the two usages.
Others remain as defined. Full list:
@z_test(ar, mean, sigma) - Normal distribution comparison of
array with known parent mean and standard deviation. Returns
the two sided probability
ndtr(|z|)-ndtr(-|z|) = 2*ndtr(|z|)-1
@t_test(ar1,ar2) - Two sample t-test assuming the same variance
@t_test(ar ,mean) - One-sample t-test for specified mean.
@u_test(ar1, ar2) - Two sample t-test assuming unequal variance
@td_test(ar1,ar2) - Two-sample dependent T-test. Used when
repeated measurements are made of a sample population and
existence of a difference is sought. Tests (ar1-ar2) and
compares mean with 0.
All @t_test(a,b) functions return the probability that a random set
of samples from the same data set would have a t value less than or
equal to the observed value. 99% ==> that only 1% of the time would
the agreement be this poor. Read @t_test as the confidence that the
NULL hypothesis (that the means are the same) can be rejected.
Note: The @u_test does not truncate the DOF parameter to integer, but
uses the analytical extension of the Student t-test for real
values of DOF. Thus values may differ slightly from other
implementations of this test.
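The @z_test probability can be reproduced in Python using the error function for ndtr (a sketch under the stated two-sided formula; function names here are illustrative, not GENPLOT's):

```python
from math import erf, sqrt

def ndtr(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def z_test(samples, mean, sigma):
    """Two-sided probability ndtr(|z|)-ndtr(-|z|) = 2*ndtr(|z|)-1 for the
    sample mean against a parent with known mean and sigma."""
    n = len(samples)
    xbar = sum(samples) / n
    z = (xbar - mean) / (sigma / sqrt(n))
    return 2.0 * ndtr(abs(z)) - 1.0
```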
MOT 12/7/2007 - Made leading whitespace ignored in datafile reads. This
applies to all input - leading and trailing whitespace is
removed before any tests are done on the data lines. Allows
@end to be anywhere on line, as well as comments (/*, etc) to
be indented without errors.
MOT 12/1/2007 - Increased maximum size of string in printf command to 8192 chars.
This fills 100 lines on the screen with 80 characters / line.
MOT 11/29/2007 - Added functions to take list of values immediately
ave(a,b,c,d) average(a,b,c,d) mean(a,b,c,d)
std(a,b,c,d) stdev(a,b,c,d)
median(a,b,c,d)
count(a,b,c,d)
@count(y [,ilow,ihigh]) <== just returns y:npt or ihigh-ilow+1 limited
For complex arguments, average and stdev treat real and imaginary parts as
independent. For median, first sort on real and secondarily on imaginary.
Count obviously doesn't matter. @count doesn't have a real/complex question.
MOT 11/21/2007 - Added option "-sum" for sort -strict modes. Similar to -delete
and -average, but totals up values in the equivalent X values
MOT 11/22/2007 - Added the @t1_test(ar,mean) function to complement the
@t_test(ar1,ar2). The @t1_test function tests an array
compared to an expected mean value using the one-sample
Student t-test. The @t_test(ar1,ar2) uses the two-sample
independent Student t-test assuming the same variance. The
@u_test(ar1,ar2) drops the assumption of the same variance.
@t_test(ar1,ar2) - Normal 2 independent sample Student
t-test on samples taken from a population assuming the
same variance.
@t1_test(ar1,mean) - One-sample Student t-test for having
specified mean.
@z_test(ar1, mean,sigma) - Z test on array for known mean
and standard deviation of the parent. This is just
the normal distribution so returns two sided probability
ndtr(|z|)-ndtr(-|z|) = 2*ndtr(|z|)-1
@td_test(ar1,ar2) - Two-sample dependent T-test. Used
when repeated measurements are made of a sample
population and existence of a difference is sought.
Basically for arrays x,y looks at x[i]-y[i] and
tests if the mean is consistent with 0.
MOT 9/9/2007 - Added config command "MACHINE_INFO" which just prints out
the info returned by a uname() system call. Useful in
debugging, but not generally of user value.
MOT 9/7/2007 - Added ERRPRINTF command. Similar to PRINTF command line
but outputs with in the "error mode" with a beep and the
alternate (default red) color. Useful to say something
went wrong. For Windows users, there is also now the
MessageBox command which puts up a system modal pop-up
message box. This can be used to really get attention.
MOT 9/7/2007 - Added MESSAGEBOX command. Use MessageBox -? for help.
This command is only available in the Windows version.
MOT 8/26/2007 - More matrix changes
pl -contour -at <value>
draws single contour at specified value. Also increased
the size of structures for a contour to permit up to 1E6
points in 2000 chains.
MOT 8/26/2007 - More matrix changes
matrix flatten
fits the surface to a plane and then subtracts off the
non-zero part. Leaves average of the surface unchanged
while flattening out. Does use the x,y values of the
surface if given, but only impacts if non-uniform grid.
MOT 7/4/2007 - Curves and surfaces now define the internal variable :IDS
pointing to the identifier string. Consistent with all the other
sub-element format. The following commands are now synonymous.
let mycurve = "an identifier"
let mycurve:ids = "an identifier"
retr mycurve let ids = "an identifier" archive mycurve
MOT 6/14/2007 - On PASTE_CLIPBOARD in XGENPLOT or XRUMP, any TAB characters
are converted to spaces. This will enable pasting of columns
from Excel directly into a "read <<" command. The fundamental
origin of the problem is that a TAB is interpreted as a
request for filename completion on normal input. When pasted,
it doesn't get handled properly. Fortunately, cannot think of any
reason that a paste operation would want that completion
capability, or why a Paste would really need a TAB character.
MOT 6/8/2007 - Modified the handling of the "X" button on PMDRIVE. It now
gives a warning and message for how to restart the window
if accidentally closed.
MOT 6/8/2007 - Modified the handling of the "X" button on XRUMP and XGENPLOT.
This button will now properly terminate the program with
extreme prejudice, but only after warning the user. You
must select the RETRY to get it to fully close. So far, it
seems this will not leave the orphan processes in task
manager so is fully effective. However, nothing is done as
far as cleanup or careful process termination. Use with
discretion.
MOT 6/8/2007 - Added ability to sort strings and arrays. The "sort"
command takes optionally a string_array or real_array
argument and sorts with limited options.
SORT [curve | array | string_array] -options
SORT -curve -options
SORT -array -options
SORT -strings -options
Added the option -NOCASE which causes the sort to ignore
case for strings.
MOT 6/7/2007 - Added reading of strings to the generic "read" of GENPLOT.
alloc c1 string array 500
alloc c2 string array 20
read test.dat -list c1 1 c2 2 /
will work as expected.
MOT 6/4/2007 - Modified behavior of several array types (string, int, etc.)
in both the allocated and linked version. Should mostly have
the :npt element now with the size. The size can always be
obtained with sizeof(), but now also :npt. Using :npt allows
the "size" to be actively changed, such as number of active
elements in a string array.
Have to be careful -- c1:x is an array, but c1:x:npt is
not defined. Only one level of "indirection" permitted.
MOT 5/31/2007 - Added "GOLD" as a rainbow palette choice
Added "USER" as a rainbow palette. This uses an internal
array $RAINBOW[] of 1024 elements as a lookup for the colors.
palette rainbow user /* Required to link $rainbow
setv nz = $rainbow:npt-1
define blue(x) = 255*(1-2*abs(x/nz-0.5)) /* New transformation
define green(x) = 255*(1-x/nz)
define red(x) = 255*(x/nz)
let $rainbow = rgb(red(i),green(i),blue(i)) /* Reset the $rainbow array from grey
create -surface s1 x^2-y^2 -range -1 1 -1 1 -grid 512 512
plot s1 -bitmap
If you come up with a good color scheme, send me the eqns
and I'll put it in as a hard-coded rainbow palette. Suspect
only power users will use this feature.
MOT 4/18/2007
Added functions
time2double(str)
time2float(str)
to convert a time string into seconds. Handles only hh:mm:ss.s and
does very limited error checking. Terminates conversion on the first
invalid character. Acceptable convertible strings include
18:24:16.427 - normal 24 hour time sequence
7378:16.6 - minutes beyond 60 - acceptable with seconds
918462.2 - just seconds with large number
1827:24:17.7 - hours beyond 24 - acceptable
26:84:93.6 - accepted, but pretty much meaningless
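The conversion rule can be sketched in Python, matching the examples above (fields are weighted by 60 from the right; conversion stops at the first invalid character; a sketch, not GENPLOT source):

```python
import re

def time2double(s):
    """Convert hh:mm:ss.s (or mm:ss.s, or bare seconds) to seconds,
    stopping at the first character that is not a digit, ':' or '.'."""
    m = re.match(r"[0-9:.]+", s)
    if not m:
        return 0.0
    total = 0.0
    for field in m.group(0).split(":"):
        total = total * 60.0 + float(field or 0)
    return total
```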
MOT 3/28/2007
Added an option -nearest to 3d_grip to return the value of the
nearest known X,Y,Z triple. No interpolation.
MOT 3/1/2007
Increased the maximum number of columns that can be read in a single
READ -LIST
from 40 to 128. Still has a hard limit, but hopefully now enough to
get by most needs.
MOT 11/30/2006
Implemented a -FOR into the possible limiting parameters
of the general fitting functions.
fit -linear -for abs(x)<5
will now work. Per request from Roger
MOT and IJT - 10/1/2006
Added the commands PUSHD and POPD, corresponding to their command
prompt equivalents in XP.
PUSHD - saves current directory and does as DCD (disk and drive change)
POPD - restores previously PUSHD'd directory
PUSHD may be called multiple times, maintained as a link list.
[Ian Thompson's first addition to the program -- next owner :-) ]
MOT - 7/18/2006
By request increased string length for echo from 255 characters to
LONG_STR_SIZE (currently 2048 characters)
MOT - 7/12/2006
By request:
LexEqual(str1, str2, minlen)
minlen > 0 ===> string comparison is case insensitive
minlen < 0 ===> string comparison is case sensitive
absolute value of minlen is minimum number of characters that
must match.
Returns: 0 ==> not a match
1 ==> matches all characters, but strings not identical
2 ==> exact match (within case requirement)
Leading spaces are always ignored in the strings.
MOT - 6/26/2006
Added new matrix commands to find peaks in a matrix, and to put/extract
fractional rows and columns
matrix row_peak
matrix col_peak
Returns the maximum in each row or column (respectively) returning the
data to the main curve.
matrix extract_data
[ROW | COLUMN]
matrix insert_data
[ROW | COLUMN]
These commands begin at the specified row/column in the matrix and either
extract or insert the data from the main curve into the matrix. Outside
of establishing the start point validity, there is no checking on range
validity of the insertion or extraction -- up to the user to preset the
value of NPT and ensure that all references remain within the array.
MOT - 5/14/2006
Removed most of the artificial limits on the 3D matrix read of
unformatted data. This applies to files that are just a sequence
of numbers without headers indicating the size.
read unformatted.3d.data -surface s1
It will read up to a maximum of 33,554,432 (2^25) columns or rows, with
a maximum of 2^26 data points (rows x columns). This limit is set by the
256 MB maximum for an allocated memory block in NT. The read will
actually continue until the memory allocation fails, so 64 bit versions
of the OS may raise this limit. The number of data objects on the first
line sets the column dimension -- from that point on, format in the file
is ignored and data is read sequentially assuming row by row ordering.
MOT - 5/12/2006
Added some internal variables to control the GENPLOT read.
$MaxReadWarn - number of times a warning is printed on read
$MaxReadErrors - number of errors allowed during read
Warning messages are printed up to $MaxReadWarn times,
then just a message saying there may be more errors. Read
continues until $MaxReadErrors is reached, and then an
error message is printed and read aborts.
The previous behavior printed 20 WARNINGS and then just
continued without messages forever.
These variables are only linked on the first use of the READ
command. To be sure they are linked, a new option is added
READ -DUMMY
which does nothing but link the variables.
[For Pat Smith and Roger de Reus]
MOT - 3/22/2006
Added the option -PALETTE to plot/overlay which allows temporary
setting of colormap basis for "rainbow" types graphs. This includes
the new bitmap drawing.
MOT - 3/22/2006
Finally implemented the -BITMAP option in plot. Works only for
plotting a surface, but will draw it as a series of filled rectangles
to the screen.
pl s1 -bitmap
Ideally the axis should be drawn after a bitmap, but this breaks too
many nice features in overlay. Lets bitmaps be small regions of the
screen as well.
Useful options to -bitmap mode
-zrange - Overrides the range
-grid - Limits the number of elements in the bitmap.
Every other / every third, etc. will be selected until the
number to be drawn is less than specified. Becomes an issue
when a surface is on the order of 4000 points.
-palette [afm | ...] - see above, override the color palette
MOT - 3/21/2006
The -rainbow option in plotting a 3D curve now uses continuum colors
rather than the discrete 16 colors used previously. Don't see a reason to
maintain the old format.
MOT - 3/21/2006
An issue of working with "subroutine" macros is the challenge of
dealing with directories for the macros. Modified XEQ command to
enable "look for macro in directory of the current running macro".
xeq c:\macros\test
xeq <>/sub1 - Actually runs c:\macros\sub1
xeq []/sub2 - Actually runs c:\macros\sub2
Either <> or [] as leading characters is accepted.
MOT - 3/20/2006
Got comment on painful printing of infinite error messages on some
calculations. I agree this is a major annoyance and have taken some
efforts to eliminate. Math evaluation will now print a maximum of
25 error messages per "parse" of a function. This won't eliminate
all of the problems, but many.
MOT - 1/9/2006
Many new functions to deal with 2D fft analysis of images and radial
distribution functions.
matrix peak_detect
Returns curves with X,Y,[z] coordinates of peaks (higher than all
8 neighboring cells). Makes best estimate of the interpolated
position within the space. Fits row/col directions to quadratic
and estimates peak position and height. All 8 are used to
establish that it is a peak (must be absolute peak) but only 5 are
used to estimate position (not fitting to all). If two points
are at exactly the same height (max), neither will be accepted
as a peak. Use "let z = z+0.001*ndtri(rnd())" to avoid this if
it is an issue with quantized data.
matrix threshold
Modifies surface so points >=sz are 1 and below are 0
matrix window
Modifies surface so points are 1 where zlow <= z <= zhigh and 0 elsewhere
Threshold at sz plus requirement of at least cnt points above
threshold along each column/row. Isolates regions that
would be connected through only one or two pixel "bridges".
transform rdf
[-FAST | -MIRROR <3|2|1|0>]
[-BOX <length>]
Performs a real space radial distribution operation on the
2D or 3D data. Each returned bin contains the density of points
(per unit area) in an annulus (or shell) centered on the
X value.
X value. Total of extending for 0 option.
| 2D | 3D |
-------------------------------------------------------
0 | no mirroring | no mirroring |
1 | mirror edges (4) | mirror faces (6) |
2 | + mirror corners (4) | + mirror edges (12) |
3 | (same as 2) | + mirror corners (6) |
-------------------------------------------------------
The default is 3, all mirroring. This will give the proper
rdf for r < L (minimum box size).
The size of the box is automatically determined from the
span of the data. The -BOX option overrides this and
enforces a PBC length (same for all dimensions).
MOT - 1/7/2006
Incorporated reading of Digital Instruments AFM images into a
surface directly. This is not guaranteed to work in all cases, but
it works reasonably well on the data I've dealt with. Makes sense
to put it generally available.
read -AFM [-frame <n>] [-rows ...] [-cols ...]
The optional frame specifies the image within the file. Default is
to read the first image stored in the data file. The rows and
columns may be specified, but are presently ignored, and probably
will stay that way. Output is hopefully with all parameters in
nanometers.
MOT - 1/7/2006
Added matrix transformation to yield the S(Q) from the 2D FFT
MATRIX S(Q)
Takes the surface and generates the S(Q) averaged over all
equivalent Q vectors, assuming 0,0 is in the center of the image.
This function is intended to be called only after a 2D FFT, though
the math will work on any image. Generates the spatial inverse of
the radial distribution function S(Q) (hence its name).
Example:
MATRIX FFT s_data s_power -power
MATRIX S(Q) s_power
plot -lt 1
Data near Q=0 is, as expected, noisy. s_power:x values are assumed
also to be the true Q values and are copied to the X coordinate of
the resulting curve. Curve size will be smaller of rows/2 or cols/2
of the source matrix. Again, expect rows = cols from the FFT.
MOT - 12/13/05
Added the function
@2DIntegral()
which evaluates the 2D integral of a surface structure. Sorry, at the
moment there is no subtlety to the integral - straightforward trapezoidal
integration. Each point is assigned an area equal to 1/2 the distance to
the neighboring x/y lines.
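The weighting scheme can be sketched in Python (an illustration of the described trapezoidal rule, with x/y axis arrays and z given row by row; integral_2d is a hypothetical name):

```python
def integral_2d(x, y, z):
    """Trapezoidal 2D integral: each point is weighted by half the
    distance to its neighboring x and y grid lines."""
    def weights(v):
        n = len(v)
        if n == 1:
            return [1.0]
        w = [0.5 * (v[1] - v[0])]
        w += [0.5 * (v[i + 1] - v[i - 1]) for i in range(1, n - 1)]
        w.append(0.5 * (v[-1] - v[-2]))
        return w
    wx, wy = weights(x), weights(y)
    return sum(wy[r] * wx[c] * z[r][c]
               for r in range(len(y)) for c in range(len(x)))
```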
MOT - 12/13/05
Modified SETVAR to include ability to specify an integer type in sets.
setvar [-int | -real | -double | -complex | -fileptr]
Added -? as help also.
MOT - 12/1/05
Added interpolation from surfaces.
@zinterp(surface, x, y)
Attempts to interpolate between points on a surface. Uses constant
extrapolation beyond the edges of the surface. Attempts to properly
handle X,Y scales on surfaces, but don't push too hard (ie. unsorted).
Basically gives an analytical way to determine values off of a
surface for complex functions.
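A bilinear-interpolation sketch of this behavior in Python (assumes sorted axis arrays and clamps to the edge value beyond them, mirroring the constant extrapolation described; zinterp here is illustrative, not GENPLOT source):

```python
def zinterp(xs, ys, z, x, y):
    """Bilinear interpolation on a surface z[j][i] over sorted axis
    arrays xs, ys; constant extrapolation beyond the edges."""
    def locate(v, vs):
        if v <= vs[0]:
            return 0, 0.0              # clamp to low edge
        if v >= vs[-1]:
            return len(vs) - 2, 1.0    # clamp to high edge
        for i in range(len(vs) - 1):
            if v <= vs[i + 1]:
                return i, (v - vs[i]) / (vs[i + 1] - vs[i])
    i, fx = locate(x, xs)
    j, fy = locate(y, ys)
    return ((1 - fx) * (1 - fy) * z[j][i]     + fx * (1 - fy) * z[j][i + 1]
          + (1 - fx) * fy       * z[j + 1][i] + fx * fy       * z[j + 1][i + 1])
```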
MOT - 12/1/05
Finished the 2D FFT capabilities. Linked under the MATRIX commands
matrix FFT -options
-power Power spectrum (default)
-magnitude Amplitude/magnitude
-dB Power in dB
-real Real part of the FFT
-imag Imaginary part of the FFT
-PSD Normalized as a spectral density (per freq^2)
-square Square windowing function (default)
-parzen Parzen windowing
-welch Welch windowing
The windowing characteristics are important only if trying to look at
dB. Not nearly as useful as in 1D. Must be better windows for 2D
somewhere.
Results of the FFT have the DC frequency at x[npt/2] (ie. x[512] in
a 1024 point matrix). Frequencies run from -fmax to +fmax-df.
Matrix *must* be 2^n x 2^m in size. Only tested extensively for
square matrices - hopefully no issues if rectangular. Spatial
frequencies are correct based on the X/Y scales of the source data.
Compared to DI AFM results, there are factors of 2 floating around.
The maximum POWER and maximum AMPL are both 2X less than reported
by DI. This makes no sense (power should go as amp^2), so I choose
to leave it, believing that I have it correct.
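The power spectrum layout described above (DC at the center, frequencies
from -fmax to +fmax-df) can be sketched in Python. This is an illustration
of the frequency layout, not GENPLOT's FFT code; the Welch window here is
one common variant and is an assumption:

```python
import numpy as np

def power_spectrum_2d(z, dx=1.0, window="square"):
    # 2D power spectrum with DC shifted to the center of the matrix.
    n, m = z.shape
    if window == "welch":                        # 1 - ((i - n/2)/(n/2))^2
        wi = 1.0 - ((np.arange(n) - n / 2) / (n / 2)) ** 2
        wj = 1.0 - ((np.arange(m) - m / 2) / (m / 2)) ** 2
        z = z * np.outer(wi, wj)
    f = np.fft.fftshift(np.fft.fft2(z))          # DC now at [n//2, m//2]
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=dx))  # -fmax .. +fmax-df
    return freq, np.abs(f) ** 2
```

A constant matrix puts all power in the single DC pixel at [n//2, m//2],
which checks the shift convention.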
MOT - 11/30/05
Added -BMP as alias to -BITMAP for read/write
Added -COLORMAP as alias for -PALETTE in BITMAP write
MOT - 9/22/05
Document: To set elements of an array
let p(0..5) = 1,2,3,4,5
You must use parentheses () and not brackets [] and exactly two dots.
But then you get to set multiple values in a single line. Has been in
the code for a long time, but never could remember the format. Need to
have a let -? option. (SO DO IT)
MOT - 4/18/05
Added the options -file and -name to read command. These versions go
through LexGetStrExpr instead of LexGetFile allowing use of string
expressions for the name.
read -name sprintf("die_%d.%3.3d", die_num, die_spot)
will work properly. The previous mode, not using -name, will fail
read sprintf("die_%d.%3.3d", die_num, die_spot)
with the message that "sprintf("die_%d.%3.3d", die_num, die_spot)"
cannot be located as a file.
MOT - 4/8/05
Modified the initialization of the print orientations so the clipboard
and metafile default to rotated (normal viewing on screen and
Powerpoint). The values are not defaulted in the distributed
genplot_.ini.
MOT - 3/15/05
Modified the bitmap writing routine so that it corresponds to the X/Y
values if they exist. X will increase left to right, and Y bottom to
top. This, by default, is the same as before, with the first line at
the bottom.
MOT - 3/12/05
Modified the WAIT command to include a timeout capability. However, it
doesn't work with XGenplot (no way to peek the keyboard buffer). Needs
more code to be done correctly. But did get the function to return a
code $key with the character pressed to terminate the wait.
MOT - 2/22/05
Finally fixed and properly implemented the contour function. Based
on the plot function. 2D plotting of a surface now defaults to contours.
read 9e.bmp -bitmap b1
plot b1 -contour -dz 40 -dz2 5 -zscan 10 150 -rainbow -zrange 0 150
-contour Contour map (see more options below)
-dz -dz2 Major and minor intervals on contours
-zscan Start and ending values for contours
-zrigid (default) Draw only points from contour
-zspline Fit contour points with spline for curve
-zsmooth Fit contour points with smoothing spline
-rainbow [-zrange ] Use pretty colors, with defined range
create -surface s1 x^2-sin(2*pi*y^2) -range -1 1 -1 1 -row 500 -col 500
pl s1 -zrange 0 1 -rainbow
MOT - 2/22/05
Added alias -export as command to print out all aliases in a form that
can be directly added to a command file to regenerate. Just convenience.
MOT - 1/19/05
An alternate (and better) way to handle the Ultratech problem is to add the
ability to specify the column delimiters in an ASCII file read. This
now exists with the -DELIMITERS "list" or -DELIMS "list" option. To
read based on & separating columns, use
read test.dat -col 1 3 -delim "&"
or for tabs
read test.dat -col 1 3 -delim "\t"
Note that it is not necessary to C-quote the string (`"\t"), though
the effect is the same.
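The column splitting that -DELIMITERS enables can be sketched in Python.
This is a hypothetical helper for illustration, not GENPLOT's reader; it
splits on any character in the delimiter list and pulls 1-based columns:

```python
import re

def read_columns(lines, cols, delims="&"):
    # Split each line on any character in delims and select the
    # requested 1-based columns, as in: read test.dat -col 1 3 -delim "&"
    pat = "[" + re.escape(delims) + "]"
    rows = []
    for line in lines:
        fields = re.split(pat, line)
        rows.append(tuple(float(fields[c - 1]) for c in cols))
    return rows
```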
MOT - 1/19/05
Forced to add option "-noexpressions" to the READ command in ASCII
format to deal with Ultratech :-(. It prevents attempting to read
expressions in a data file, returning error if there isn't a valid
number in the column.
MOT - 1/19/05
Added transform "VALUE_PAD" which pads the data with a specified value
TRANSFORM VALUE_PAD
Very similar to ZERO_PAD except allows another value to be used.
MOT - 11/9/04
Added ability to do a histogram on an array without first copying it
to the main curve. Also changed so that histogram takes all options
that were created in the 2D histogram.
transf hist -array wafer:z -dx 0.08
transf hist -? works
Old format is also still valid.
MOT - 10/21/04
Added a 2D histogramming module. Takes an X-Y curve and populates a
2D histogram with frequency. Surface will be created if necessary to
hold the values.
transf 2d_hist [-options]
Lots of options. Use transf 2D_Hist -? to get list.
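The idea behind the 2D histogram (bin an X-Y curve into a frequency
surface) is the same as NumPy's histogram2d; a minimal Python sketch,
with illustrative data and bin settings that are assumptions:

```python
import numpy as np

# Populate a 2D frequency histogram from an X-Y curve, in the spirit
# of TRANSF 2D_HIST. Bins and ranges here are purely illustrative.
x = np.array([0.1, 0.2, 0.6, 0.7, 0.8])
y = np.array([0.1, 0.9, 0.6, 0.6, 0.1])
counts, xedges, yedges = np.histogram2d(x, y, bins=2, range=[[0, 1], [0, 1]])
# counts[i, j] is the number of points landing in x-bin i, y-bin j
```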
MOT - 10/21/04
Added option to TRANSFORM HISTOGRAM [-CENTER] which causes it to
work like -CBIN
MOT - 10/18/04
Added @stddev() as abbreviation for @std()
Added @nearest(curve, x,y)
@3d_nearest(curve, x,y,z)
which return the index of the point "nearest" in distance to the
specified values.
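A minimal Python analogue of the 2D case (illustrative only, not the
@nearest() implementation):

```python
import numpy as np

def nearest(x, y, x0, y0):
    # Index of the curve point closest in Euclidean distance to (x0, y0),
    # in the spirit of @nearest(curve, x, y).
    return int(np.argmin(np.hypot(np.asarray(x) - x0, np.asarray(y) - y0)))
```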
MOT - 9/23/04
Added a couple of functions for working with binary files and
figuring out the format. Mostly tired of writing C programs to
determine if binary files are simple real numbers for some OS.
int2hex(ival) hex2int("string")
float2hex(rval) hex2float("string")
real2hex(rval) hex2real("string")
double2hex(rval) hex2double("string")
Use is obvious. Hex2