
g.mccormick (Customer) asked a question.
This example uses a 2D array to hold known X,Y data for linear interpolation, finding an unknown Y value from a given X value.
The array MUST be set up so that the total number of X,Y pairs equals the number of ROWS, and it has 2 columns. The X values MUST be in the (ROW),(1) elements, and the array must be ordered by increasing X values. In my example, I created an array of R507 PT data in which the X (PSIG) values are in the (ROW),(1) elements and the Y (deg_f) values are in the (ROW),(2) elements. To call the task, the array is copied to a working array, and you must set the value for the maximum number of array rows to test. This gives flexibility: if you have a dataset with 17 rows, you copy your dataset array to the working array and set 17 as the max.
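This isn't PLC code, but a minimal Python sketch of the layout and calling convention described above may help. The dataset values here are hypothetical placeholders (not real R507 PT data), and the names `dataset`, `working`, and `MAX_ROWS` are illustrative:

```python
# Hypothetical dataset: each row is one (X, Y) pair, X strictly increasing.
# Mirrors the (ROW),(1)=X, (ROW),(2)=Y convention above; Python lists are
# 0-indexed, so column 0 holds X and column 1 holds Y.
dataset = [
    [10.0, -20.0],
    [20.0,  -5.0],
    [40.0,  12.0],
    [80.0,  35.0],
]

MAX_ROWS = len(dataset)  # e.g. set 17 for a 17-row dataset

# Copy the dataset into the working array, limited to MAX_ROWS rows.
working = [row[:] for row in dataset[:MAX_ROWS]]
```

The copy step is what gives the flexibility: one interpolation task can serve any dataset, since the caller just loads the working array and sets the row count.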
The called task will test for array faults, underrange, and overrange. For underrange or overrange, the Y value is set to the value associated with the lowest or highest X value in your dataset array.
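As a sketch of that behavior (again in Python rather than the PLC task itself), the routine below checks for a malformed table, clamps out-of-range inputs to the first or last Y value, and otherwise interpolates linearly between the bracketing rows. The function name and the `(y, fault)` return shape are my own assumptions, not part of the original task:

```python
def interp_xy(table, max_rows, x):
    """Linear interpolation over a 2-column table of (X, Y) rows.

    X must be strictly increasing. Underrange/overrange clamp to the
    first/last Y value. Returns (y, fault); fault is True when the
    table is malformed (too few rows or X not increasing).
    """
    rows = table[:max_rows]
    if len(rows) < 2:
        return 0.0, True                      # array fault: too few rows
    for i in range(1, len(rows)):
        if rows[i][0] <= rows[i - 1][0]:
            return 0.0, True                  # array fault: X not increasing
    if x <= rows[0][0]:
        return rows[0][1], False              # underrange: clamp to lowest-X Y
    if x >= rows[-1][0]:
        return rows[-1][1], False             # overrange: clamp to highest-X Y
    for i in range(1, len(rows)):
        if x <= rows[i][0]:                   # found the bracketing segment
            x0, y0 = rows[i - 1]
            x1, y1 = rows[i]
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0), False
```

Clamping instead of faulting on out-of-range inputs is a deliberate choice: the output stays usable at the edges of the dataset rather than going to zero or an error state.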
With this task, you can build datasets of fairly arbitrary length; just understand that the more elements the loop has to step through, the longer the CPU will take. I feel this is a cleaner approach than having to call the scale non-linear instruction multiple times for large datasets.
Give this a try and let me know what you think.
BTW, I tested this in the simulator, but I did not bother to load it into an actual CPU.
@g.mccormick (Customer) Thanks for sharing! Would you be interested in posting your example in our community GitHub library?
https://github.com/AutomationDirect/Customer-Project-Examples
Sure I would.
I suppose that you could specify start row and end row (and array size) for more flexibility -- depending on application, of course.
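A rough Python sketch of that start-row/end-row variant, under the same assumptions as before (the function name, the 0-indexed inclusive row range, and clamping to the selected span are all illustrative choices, not part of the original task):

```python
def interp_xy_range(table, start_row, end_row, x):
    """Interpolate using only rows start_row..end_row (inclusive,
    0-indexed) of a 2-column (X, Y) table with increasing X values.
    X is clamped to the selected span's endpoints."""
    rows = table[start_row:end_row + 1]
    if len(rows) < 2:
        raise ValueError("need at least two rows in the selected range")
    # Clamp to the span covered by the selected rows.
    x = min(max(x, rows[0][0]), rows[-1][0])
    for i in range(1, len(rows)):
        if x <= rows[i][0]:
            x0, y0 = rows[i - 1]
            x1, y1 = rows[i]
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

This would let one large dataset array serve several sub-ranges without copying, at the cost of two extra call parameters.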