

GlyphWorks Quick Start Guide 2.2

Copyright © nCode International 2005


Contents:

Introduction: How to use these Tutorials

Tutorial 1—Data Import and Display
This tutorial shows you how to use the GlyphWorks user interface and shows how ASCII time signals can be imported into the software. Basic GlyphWorks display tools are shown that allow data plotting on the screen.

Tutorial 2—Basic Signal Analysis
This tutorial introduces some of the many analysis capabilities of GlyphWorks. GlyphWorks' process driven interface helps make complicated analyses intuitive and fast. In particular, an introduction is provided on Amplitude Probability Analysis, Statistical Analysis, Rainflow Analysis and Frequency Analysis. The theory behind these is briefly discussed and examples are given to illustrate the differences between good and bad data.

Tutorial 3—Data Manipulation
Here you are introduced to some of the many tools used for manipulating data. We will take the anomalous data found in Tutorial 2 and use the tools to 'clean' it for further analysis. In particular, you will extract a sample of good data from a much larger time signal, edit the data using an interactive graphical editor, then remove the spikes and carry out frequency filtering.

Tutorial 4—Stress Life (SN) Fatigue Analysis
After an overview of SN theory, GlyphWorks is used to calculate the stress fatigue life of the component introduced in Tutorial 3. Next, demonstrations of the Glyphs are shown for advanced sensitivity studies, 'what-if' scenario calculations and 'back' calculations.

Tutorial 5—Strain Life (EN) Fatigue Analysis
After a brief introduction to EN theory, GlyphWorks is used to calculate the strain fatigue life of the component.

Introduction

How to use these tutorials


Welcome to the ICE-flow GlyphWorks 2.2 tutorials. This introduction covers:
• Conventions used in the tutorials
• How to get more information from the GlyphWorks online documentation
• An overview of the contents of each tutorial


Introduction
There are 5 tutorials designed to get you quickly up to speed with the GlyphWorks 2.2 advanced engineering software via a series of step-by-step procedures. By working through the tutorials you will learn how to use the advanced functionality of GlyphWorks. More importantly, you will learn how to process data files and carry out fatigue analyses. The tutorials should take about an hour each to complete and assume no prior knowledge of GlyphWorks or other nCode products such as nSoft. Each tutorial builds on what you learned from the previous tutorials, so you should do them in sequence. For example, the input file used in the fatigue analysis in Tutorial 4 is produced at the end of Tutorial 3.

Conventions used in the tutorials
The page layout of the tutorials is designed for ease of navigation: the procedures to follow are shown in the green panel, while more detailed explanations of the analysis and screen shots are shown in the body of the document.

Where you see this icon there is a link to more detailed GlyphWorks help.

The Procedure panel gives concise instructions on what glyphs and files to choose, and what analysis options to use in the examples.

Information panels give you theory and a conceptual background to the tutorial.

A description and introduction to the work and analysis is contained in the main body of the document along with relevant screen shots to help you through the analysis.


Getting more information!
GlyphWorks (which is a part of the ICE-flow suite of programs) has comprehensive online documentation (over 800 pages of it). We have put signposts into the online documentation pointing to where you can get further information about the topic you are doing. For example: See glyphref.pdf page 27 for more detail

Further Training
As its name suggests, this guide is intended as a quick start into GlyphWorks. Whilst preparing the guide I have tried to introduce each topic with a brief technical description of the analysis being undertaken. This guide is aimed at expert fatigue engineers and students alike, but if you would like more detailed information on these topics, consider taking one of the GlyphWorks training courses. More information about them can be found on the nCode website www.ncode.com or by contacting nCode.

How to access the GlyphWorks online documentation
The on-line documentation is available from within GlyphWorks.
• You can view the GlyphWorks manual by selecting 'Help | On-line Manual' from the menu.
• Alternatively, you can view the entire collection of ICE-flow manuals by selecting the Help submenu from the left hand side applications bar and selecting the Contents document.
• The document intro_help.pdf tells you how to locate any information in their 800 pages using Adobe Acrobat's search feature.


Tutorial 1—Data Import and Display in GlyphWorks


Learning objectives


This tutorial shows you how to operate the GlyphWorks 2.2 interface, and how to carry out essential tasks. The topics are:

Topic 1 – Learning about test names and channel names
Topic 2 – Starting a GW project—an introduction to the GlyphWorks interface
Topic 3 – Displaying data files in GlyphWorks—Glyphs and their properties
Topic 4 – Carry out a simple rainflow analysis—using multiple glyphs in a flow
Topic 5 – Exporting data from GlyphWorks to an ASCII CSV file for use in MS Excel
Topic 6 – Importing data into GlyphWorks from an ASCII CSV file

Pre-requisites
ICE-flow GlyphWorks 2.2 must be installed on your PC.
The following tutorial files must be installed in your working directory:
vib01.dac to vib05.dac, StrainRosette.csv


Topic 1
In this topic we learn about some standard file naming techniques that you can use to make your data analysis flow more easily. We introduce the concept of breaking down data in the form of 'Tests' and 'Channels'.

GlyphWorks can handle all sorts of data, from a simple single-channel test to a complex multi-test, multi-channel configuration. If you only measure small amounts of data then this functionality is probably irrelevant, but if you handle lots of measured data and need to process it automatically then this will save you hours of work.

Developing a Test Program
Suppose we are designing a new anti-roll bar (or sway bar) for a car. We need to make sure that the anti-roll bar works properly on all road surfaces, under all loading conditions from a single occupant to fully laden. To make sure we have covered all eventualities we develop a testing program that measures the response of our anti-roll bar for each 'Event' that it might see. Our test program might look something like this:

Test Number   Surface                 Road Speed   Weight %
01            Smooth straight road    50           50
02            Slow curve              50           50
03            Belgium block           30           50
04            Etc...

Now we could choose to measure all the events in one long time signal, but we often prefer to separate them into separate test files so it's easier to see how the anti-roll bar responds to each event. We can either separate them as we measure them, starting a new file just before each test, or we can use GlyphWorks to separate them later.

What Are Channels?
Once we have determined the tests we wish to perform over all the various events that our roll bar is expected to see, we must determine what parameters we want to measure. In this example we might observe the vertical displacement at each wheel along with the torque in the bar. To do this we need two extensometers for displacement and probably four strain gauges to calculate the torsion. We therefore need to measure 6 channels of simultaneous data as given below:

Channel Number   Transducer
01               Vertical Displacement Offside Wheel [mm]
02               Vertical Displacement Nearside Wheel [mm]
03               Torsion Strain Gauge 01 [µε]
04               Torsion Strain Gauge 02 [µε]
05               Torsion Strain Gauge 03 [µε]
06               Torsion Strain Gauge 04 [µε]

The various tests are shown as folders in a tree in the Available Data pane. Expanding a test will show each channel of measured data. You can pick one channel at a time or process all channels simultaneously.

File Naming Convention:
GlyphWorks is designed to handle this sort of data automatically. If you give it a number of tests as input, it will process each of them in turn. Each test, however, can comprise a number of channels of simultaneous measurements and these are all processed simultaneously by GlyphWorks. This makes it really easy to take the raw strain channels obtained above and run them through an equation to derive the torsion moment. GlyphWorks can automatically determine the channel numbering from the file naming convention or from the raw data acquisition files. If the data filename ends with a pair of digits, GlyphWorks assumes that these are the channel number, e.g. …\TestName02.dac or …\Test0102.dac. This allows up to 99 channels per test. You can change the properties if you need to use more channels, i.e. 999, 9999, etc.
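If you ever script around this convention outside GlyphWorks, the split into test name and channel number is easy to reproduce. The following is a minimal Python sketch, assuming the last two digits of the filename stem are the channel number as described above; the helper name is hypothetical and it is not part of GlyphWorks.

```python
import re
from pathlib import Path

def split_test_and_channel(filename, digits=2):
    """Split names like 'Test0102.dac' into ('Test01', 2).

    Assumes the naming convention described above: the last `digits`
    characters of the stem are the channel number (2 digits allows up to
    99 channels per test; use 3 or 4 digits for 999 or 9999 channels).
    """
    stem = Path(filename).stem
    match = re.fullmatch(rf"(.+?)(\d{{{digits}}})", stem)
    if not match:
        raise ValueError(f"{filename!r} does not end in {digits} channel digits")
    test_name, channel = match.groups()
    return test_name, int(channel)

print(split_test_and_channel("Test0102.dac"))    # ('Test01', 2)
print(split_test_and_channel("TestName02.dac"))  # ('TestName', 2)
```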


Topic 2
In this topic we launch the GlyphWorks program and set up a project folder containing example data for later analysis.
• Create a new working directory on your computer and copy the training files here.
• Start the ICE-flow GlyphWorks program.
• Select the working directory you have just created as your new project folder.
• Click the GlyphWorks application button from the application toolbox.

GlyphWorks offers you a choice of project folder. This is the location on the hard disk where all data pertaining to this project is stored. When GlyphWorks is running you can pick measured data from any input source: hard disk, network or Library data management system. The folder chosen here is used by GlyphWorks to store any working data you produce.

Application Toolbox: nCode GlyphWorks is a suite of engineering applications. Each application is launched from the toolbox positioned along the left hand edge of the screen. Applications are grouped logically to help you find the tools you need.

Available Data Tree: This tree lists all the available data files for analysis. GlyphWorks will automatically load all files in the project folder. The user can add additional files to the tree using the menu ‘File | Open Data Files…’ Data files listed here can originate from many sources including the hard disk, network or a Library data management system. They do not have to be stored locally in the project folder. The tree merely stores a link to the original data so you can find it quickly and easily.

Analysis History: This shows the history of things done in GlyphWorks so far and is used for undo and redo functions.

Glyph Palette: This palette contains the actual calculation Glyphs. These are dragged onto the workspace to construct the analysis process.

Property Editor: This shows the calculation properties of the selected Glyph. Each calculation Glyph has a number of properties depending on the type of calculation it performs. Most of these are set to sensible defaults by GlyphWorks; however, the user might want to change these for a particular calculation. You can make the property editor larger by dragging its frame. Alternatively, because the property editor is used so regularly, you can view a full sized version by right clicking on the required Glyph and selecting Properties from the menu. (See later examples)

Diagnostics and Progress: This shows a detailed account of the calculation progress so far and any errors or warnings found in the calculation process. It helps the user fix any problems quickly and determine how long a complicated calculation is likely to take.

Analysis Workspace: This is the workspace where you create your analysis process. Analysis processes are created by dragging calculation Glyphs from the Glyph Palette and dropping them onto the workspace. These are then linked by Pipes to create a calculation flow.


Available Data, Tree or Table View: The available data pane can be viewed as either a tree or a table. The tree resembles a standard directory tree, only GlyphWorks allows data to be re-grouped in different ways to improve searching. The tree branches can be re-grouped by right clicking on the tree. The following options are available:
1. Type of data, followed by test name, followed by channel
2. Type of data, followed by channel, followed by test
3. Disk directory name, followed by test, followed by channel

The available data can also be displayed in tabular format as shown here. In this view, the data can be sorted in different ways by clicking on the appropriate column title. The tabular view shows more information than the tree view; however, most people find the tree view clearer.

Importing Files from Other Locations
Additional data can be added to the available data pane using the pull-down menu option 'File | Open Data Files…'. The dialog shown below opens, allowing you to change directories, browse the data in the directory and select which data you wish to import. Data is not physically copied into the working folder, as the data tree merely contains links to the actual file. This is particularly useful if you need to use standard company data from a central server. It would be most uneconomical to keep copying this data locally, so this method allows data to be stored once but accessed as if it were local to your work.

You can select the type of data file to display and enter an optional search key if required, e.g. vib*.dac. Press the 'Scan Now' button to see all the files matching your search key.

This button scans all data files in the chosen directory. It then displays the type of data and the number of channels within each file so we can select those channels we require without having to continually handle huge sets of data.

The Available data is listed here. This shows the test data file name, the type of data present and the number of channels. Data is loaded by selecting the required tests and clicking the ‘Add Tests’ buttons adjacent.

You can import all data channels for the chosen tests or by clicking on ‘Expand Channels’, individual channels can be selected. The chosen channels are denoted by a green square.

The selected tests and channels are shown in this pane. These are imported into GlyphWorks by clicking the ‘Add to File List’ button. These buttons are used to add or remove all tests.

These buttons are used to add and remove the selected tests.


Topic 3
In this topic we look at some typical measured stress data within GlyphWorks.

You can drag data directly from the data tree onto the workspace. GlyphWorks automatically recognises the type of data and provides the appropriate data import Glyph. The Time Series Glyph has a preview ‘Display’ option that provides an interactive plot of the data on the Glyph. This can be viewed full screen by pressing the Glyph maximize button in the top right hand corner of the Glyph.

• From the Available data tree, expand the 'Time Series' branch and find the data called 'vib(dac)'. Expand 'vib(dac)' to see all 5 data channels.
• Drag 'vib(dac)' onto the workspace to create a new Time Series input Glyph and click the Display option to see the plots.
• Press the Glyph Maximize button to make the display full screen.
• Use the mouse and Tool box to zoom in and out of the plots. Notice the horizontal slider bar over the plot and see how you can scroll through the data by moving it.
• Experiment with the Cursor coordinates option on the Tool bar. Click on the plot and read various data values.
• Use the 'Next / Previous Channel' buttons to navigate through all the channels. Notice there are 5 channels in this data.
• Try using overlay and cross plots from the Tool bar. Use the properties option to display all 5 channels simultaneously.
• Zoom in on a small region of data and experiment with the different line styles from the properties options.
• When you are quite comfortable with the graphics options go to the next task.

You can zoom in on a block of data by simply clicking either side of the area you are interested in or by dragging the mouse over a window of data. All channels will automatically zoom in on the time interval chosen.

Most commonly used graphical display options are available from the tool bar shown below. This will allow you to quickly zoom on the full plot or zoom in and out in stages. You can scroll through your data and change the axes between log and linear. You can also scroll through multiple data channels using the ‘Next / Previous channel’ buttons.

Graphical Display Tool buttons include: Full plot; Zoom in/out on the X and Y axes; Switch between log and linear axes; Round Y axis to nearest whole number; Full X or Y.

Frequently used graphical options are shown on the tool bar while the remaining options can be found in the Glyph properties panel by right clicking on the plot and selecting ‘Properties…’ from the menu. The Plotting options are listed under the ’XY Graph’ tab as shown below. Axis Limits: These allow you to vary the format and limits of the axes. The format can be Log or Linear, and the limits can be set precisely by entering the desired range.

The properties are all listed under generalized headings shown in the left hand tree menu. Click the required heading and the available properties for that heading appear in the right hand pane. Press the 'OK' button to apply the properties.

Data Lines: This allows you to change the appearance of the plot. You can change colour, pen thickness, the style of line and the shape of any markers used. Markers are used to highlight the data making it easier to distinguish between channels. Markers do not represent measured points on the data as there are usually too many to plot neatly. However, you can choose the ‘Points’ option to show the actual measured points if required.

Style Options: These let you edit the appearance of the graphical plots. Channels can be shown separately, overlaid on the same axes, or as a cross plot of two channels showing the value of one on the X axis and the other on the Y axis. You can display up to 8 channel plots in one display; although the default is usually set to 4. You toggle between successive channels using the ‘Next / Previous Channel’ buttons. Various axis, grid and colour options are also available from this form.

Labels: To change the label headings, fonts and the style of numbering.

Further tool bar buttons include: Separate, overlay or cross-plot multiple channels; Change font size; Refresh; Show cursor coordinates; Next / previous channel; Cursor tracking of data points; Select or deselect all data points; Next / previous section of data; Scroll bar on/off.

Channels can be incremented / decremented in order by one channel at a time or by groups of channels. So if channels 1-4 are currently showing, the next group would be 5-8.


Topic 4
In this topic we set up a very simple rainflow analysis and view the results in a 3D histogram. We learn how to use analysis Glyphs with Pipes to channel data through a calculation.
• Using the data you loaded in the previous task, drag the 'Rainflow Cycle Counting' Glyph onto the workspace and connect this to the Time Series Glyph from the previous topic.
• Drag a 'Histogram Display' Glyph on to the workspace and connect this to the output of your 'Rainflow Cycle Counting' Glyph.
• Right click on the Rainflow Glyph and select 'Properties…' from the menu. Look at the various options available. Right click on any option and select 'Help on Property…' from the menu to see what this option means. Don't worry if you don't understand something at this stage, we'll consider the Rainflow Glyph in greater detail later.
• Run the process to see the rainflow matrices.
• Maximize the Histogram Display and change its properties to show a single channel.
• Rotate the 3D histogram by dragging the mouse and see how it follows the mouse. Use the toolbar buttons to see the different plotting options available.

Using Glyphs and Pipes: Glyphs are pictorial representations of analysis components. There are input Glyphs, output Glyphs and various function Glyphs. Input Glyphs provide a source of data, like a link to a time history file; Output Glyphs provide a sink for the data, either by graphically displaying the data or by writing to the file system; and Function Glyphs take raw data in and pass processed data out. Glyphs are linked together using Pipes connected to pads on the Glyphs themselves. Data flows into a Glyph through the left hand pad and flows out through the right hand pad. Input Glyphs only have pads on the right and output Glyphs only have pads on the left. Pads are colour coded to represent the type of data that flows through them; the colour codes are shown below. Pipes can only be used to connect like coloured pads. (Note: grey pads can take any data.) To link two pads with a pipe, simply click on one of the pads to be joined; GlyphWorks will then highlight all the compatible pads on the other Glyphs. Simply move the mouse over the other pad and click on it. Pipes can be removed by right clicking on the pipe and selecting 'Disconnect' from the menu. Alternatively, pipes are automatically removed if you delete a Glyph. Quick Tip: You can drag a Glyph from the Glyph palette and drop it on the pad of another Glyph. This will automatically create a pipe between them if they are compatible.

Colour conventions used for Pads:
Blue  – Time Series Data
Red   – Histogram Data
Green – Metadata
Grey  – Any Data Type

Input data flows into the Glyph through input pads on the left; output data flows out of the Glyph through output pads on the right. All connectors are colour coded to represent the type of data that passes through them.


3D Histogram Display
The rainflow results are shown using the 3D histogram display Glyph. This can display up to 8 interactive histograms in one display. The Glyph automatically configures the display to show up to 4 plots for multi-channel data. The number of plots shown can be changed using the Advanced Properties options (right click). Maximize the plot by clicking on the button in the top right hand corner of the Glyph. A typical plot is shown below. It can be rotated by clicking and dragging the mouse. The most common options are provided on the Tool bar shown at the bottom of the page. If you get a little lost after dragging, press the ‘Isometric’ button to return to normal. Another useful option is the ‘Top’ view option. This makes for a particularly pleasing graph when used in conjunction with the ‘Surface’ option. Advanced Properties: These options are available by right clicking on the plot and selecting the ‘Properties…’ option from the menu. The advanced property form is similar to that seen earlier for the Time History Display; however, you’ll notice some features specific to 3D plots located on the ’Styles—Plots’ option shown below.

To interactively Rotate the Graph, simply left click on the plot and drag the mouse in the direction required. The number of displays shown on each plot can be changed from this option form. The rotation angle and zoom can also be selected here. This is changed interactively by dragging the mouse over the plot; however, more control is offered by allowing the user to type in the numerical angle required. The graph can be shown using histogram towers or as a surface plot while colours can be used in different ways to enhance the display.

Top view of a Surface plot

Hint: If you want to clear the input files and look at another set of data then simply right click on the TSInput Glyph and select ‘Remove Tests’ from the menu. You can now drag another set of data from the Available data tree and drop them on the TSInput Glyph. You can also drop several tests on the TSInput Glyph at once; however, you can only view one test at a time.

Tool bar buttons include: Viewing options (Isometric, Top, Left and Right); Full plot; Change font size; Next / previous channel; View as block histogram or surface plot; Scale all plots to the same axis limits; Select or deselect all data points.

Channels can be incremented / decremented in order by one channel at a time or by groups of channels. So if channels 1-4 are currently showing, the next group would be 5-8. The mode of increment / decrement is chosen by holding the button down and then picking from the menu shown.


Topic 5
In this topic we export data to both Binary and ASCII output files.
• Using the GlyphWorks process created in Topic 3, add the Histogram Output Glyph and rerun the process.
• Edit the GlyphWorks process and add a Data Value Display Glyph.
• Export the histogram to a CSV file and look at this file in MS Excel.
• Edit the GlyphWorks process and view the numerical values of all channels in the input time series using the Value Display Glyph.

Binary Data Output:
Output data from GlyphWorks can be written to a file for later use by other programs. This might be report generation using ICE-flow Studio, test rig drive signals, input to FE-Fatigue analysis or Multibody simulation, etc. Binary data is output using the appropriate output Glyphs; these being Time Series, Multi-column or Histogram data.

By default, the Output Glyph uses the same filename as the input but appends '_out' to the end of the filename. You can change the output name via the Glyph Properties. Right click on the Glyph and select 'Properties' to get the following options:

Select the type of binary format to use: either DAC or S3 format.

Meta data can also be stored with the plot. Metadata contains information on the settings used in the analysis as well as statistical quantities and titles and units.

GlyphWorks can be instructed to automatically overwrite existing files or Ask for user confirmation first.

The new data can be added automatically to the Data tree if required; otherwise the user must press the data refresh button to load the new data. The output filename is automatically assigned based on the following options:
1. Use the existing filename with a given suffix
2. Use the existing filename with a given prefix
3. Use a new filename
4. Use the existing filename but write to another directory


ASCII Data Output:
Data can also be written to ASCII files for use by programs that cannot read binary files; these include FE analysis packages, spreadsheets, Mathcad, Matlab, etc. ASCII data is output using the 'Data Values Display' Glyph. This can be used to give a numerical view of the data on screen, but it is also able to output to CSV (Comma Separated Values) files by clicking the 'Export' button on the Glyph. The user is prompted for a filename. The Data Value Glyph does not show all data points by default but instead shows 1 in 10 points. This default can be overridden using the right-click 'Properties…' option.

Enter the channels to export. Channel numbers can be written as a comma separated list or as a range of channels expressed in the form '1-3', etc.

Select the preferred bin labels for histogram data. Cells can be labelled using the centre value or the minimum value covered by the bin.

Enter the number of significant figures to use for each value. Internally GlyphWorks maintains a precision of approximately 12 significant figures but this is probably excessive and can result in large file sizes.

This only applies to Time Series and Multicolumn data. Data might not be required for all values in a time series so this allows the user to select to output every 10th point or every 1 second for example.

Remember you can view help on any of these options by right-clicking on the option and selecting ‘Help on Property…’ from the menu.

Time Series files have no separate time axis. The time for each point is calculated knowing the starting time (Base Time) and the constant Sample Rate. Multi-column files have a separate time axis that doesn’t have to be measured at a constant sample rate and may also contain a date column for long term measured data.

Topic 6
In this topic we import data from a typical ASCII file using the ASCII Translation tool.

• Click the ASCII Translate application in the application toolbox.
• Browse for the file 'StrainRosette.csv' and select the 'Convert to Time Series' option. Press Next to continue to the next wizard form.
• Stretch the form so you can see a number of lines in the data preview window. Select Number of header lines = 12, Line number for channel titles = 5, Number of channels = 3, Line number for units = 7, Fields = Comma Separated. Press Next to continue to the next wizard form.
• Enter Sample Rate = 409.6 and press the Translate button to complete the translation.
• View the data in GlyphWorks as in Tutorial 1, Topic 3.

Enter the ASCII filename for translation. This can contain any format of ASCII data including CSV (Comma Separated Values), space separated, tab separated, etc.

Header lines are included at the top of most ASCII files to describe the data present and provide other information like the sample rate, units, etc. ASCII Translate needs to know how many header lines are present so it doesn't confuse these with the actual measured data. It can also use some of this data to automatically assign units and titles to the plots.

Previously saved ‘Setup Files’ can be loaded here to preset the Wizard values and automate the translation. The option to save a setup file is offered on the final wizard form.

There are many different ASCII based file formats around and it’s difficult to create a generic translator that is able to automatically discern one format from another. Typical formats include: ASC, TXT, CSV (Comma Separated Values), XML, etc... ASCII Translate is an interactive Wizard that guides you through the translation and allows you to see the data before you commit to the translation. If you have a lot of data in the same format you can save your translation settings and run them automatically. The CSV file shown here contains measured Time Series strain data from three legs of a strain gauge rosette. Time Series data is recorded at a constant sample rate, in this case 409.6Hz as given in the file header. You’ll notice that Time Series data contains no separate time column as the time of any point can be evaluated using the formula below:

tn = t0 + n / Δf

Where:
tn = time of the nth data point
t0 = Time Base (starting time, usually 0 sec)
Δf = Sample rate

There are many ways of formatting ASCII files and it is unfeasible to detect the format automatically. ASCII Translate provides a data preview window so the user can choose the formatting options from the ’Fields’ and see whether these are satisfactory.
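If you want to sanity-check a translation outside the wizard, the arithmetic is simple to reproduce. Below is a minimal Python sketch, assuming numpy and pandas are available and that StrainRosette.csv is laid out exactly as described above (12 header lines, channel titles on line 5, three comma-separated channels sampled at 409.6 Hz); it merely rebuilds the implicit time axis using the formula above and is not a substitute for ASCII Translate.

```python
import numpy as np
import pandas as pd

SAMPLE_RATE = 409.6   # Hz, as given in the file header (Δf in the formula)
T0 = 0.0              # time base (starting time)

# Channel titles sit on line 5 (1-based): skip 4 lines and read the header only.
titles = pd.read_csv("StrainRosette.csv", skiprows=4, nrows=0).columns.tolist()

# The measured values start after the 12 header lines.
data = pd.read_csv("StrainRosette.csv", skiprows=12, header=None, names=titles)

# tn = t0 + n / Δf : reconstruct the implicit time axis for each sample n.
n = np.arange(len(data))
data.insert(0, "time_s", T0 + n / SAMPLE_RATE)

print(data.head())
```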


Time Series data does not contain a separate time column so the user must enter the Sample rate and time base (starting time). The actual time values for each point can then be calculated.

The user can change the label and units used for the X-axis. By default these are assumed to be Time measured in Seconds.

Translating Multi-column data:
Multi-column data always requires a time column and optionally a date column where data is measured over a long period of time. The translation process is the same as for Time Series data except a Wizard form is offered for you to enter whether the data contains date as well as time, and the date formatting used. You must also enter which columns pertain to the time and date.

ASCII Translate produces a binary output file that can be used directly by GlyphWorks. GlyphWorks can read a number of binary formats and the user can select the preferred format and filename. The filename defaults to the original ASCII filename while the format defaults to the GlyphWorks standard format. The choice of format largely depends on the user’s requirements and is discussed in the information box opposite. At the end of the translation a summary log is produced so you can make sure everything went as expected. You also have the option to Save the Setup file so you can re-run the conversion quickly on other similar files.

Binary vs ASCII Files
ASCII files are very popular because they can be read using a simple text editor, like MS Notepad®, and are universal between operating systems. Binary files, in contrast, are very specific to the computer platform they were created on. Notwithstanding these benefits, binary files are generally preferred when measuring and analysing engineering data. Binary files are typically half the size of equivalent ASCII files. This is because numbers are stored in ASCII using 1 byte per digit, therefore requiring 8 bytes for 8 significant figures. Binary files, however, use a more compact storage mechanism, enabling the same degree of numerical precision to be achieved using only half the memory.

Common Types of Binary File
DAC files are compact, versatile and provide accuracy to approximately 8 significant figures. All channels are stored in separate files, which can cause problems managing filenames.
RPC files can contain multiple channels within the same file; however, this is stored in a multiplexed format, making access rather slow and preventing direct manipulation of the length of the data. RPC files store data as integer numbers and although this reduces the file size it also reduces the numerical precision to approximately 5-6 significant figures.
S3 files can also contain multiple channels and these are stored in a non-multiplexed format allowing versatility and speed. Data is usually accurate to approximately 8 significant figures; however, multi-channel data is stored with a precision of approximately 12 significant figures. This is the preferred format.


Tutorial 2—Basic signal analysis in GlyphWorks


Learning objectives
This tutorial introduces you to signal analysis in GlyphWorks with a particular emphasis on detecting anomalies (or errors) in the data. We will use the Rainflow glyph, along with the Amplitude Probability Distribution, the Frequency Analysis glyph and the Statistical Analysis glyph.


We will use these Glyphs to identify anomalies like spikes, electrical line interference, clipped data, signal drift, inadequate signal length, aliasing, etc. The following topics are considered:
Topic 1 – Eyeballing your data—looking at the time signals
Topic 2 – Time at level analysis—use GlyphWorks to identify amplitude anomalies in a signal
Topic 3 – Carry out a rainflow analysis—use GlyphWorks to identify spikes in a signal and determine whether the duration of data is statistically sufficient
Topic 4 – Frequency analysis—use GlyphWorks to identify electrical line interference in a data file
Topic 5 – Statistical analysis

Pre-requisites
You must have completed Tutorial 1.
The following tutorial files must be installed in your working directory:
strain.dac, clipped.dac, drifting.dac, spiked.dac, mains.dac


Topic 1 – Eyeballing your data
In this topic we will use the TSInput Glyph to browse through the data files we'll be considering throughout this tutorial chapter. We'll learn how to carry out a methodical scan of the data and look for problems like data drop-out, drifting, clipping and poor numerical resolution.
• Start a new GlyphWorks worksheet by clicking on the menu item 'File | New Process'.
• From the available data tree, drag the time signal file strain.dac on to the workspace and notice how GlyphWorks automatically identifies the type of data and provides the appropriate input Glyph.
• Click on the Display option and maximize the plot using the Maximize button.
• Zoom in on the data and scan through it using the slider bar at the top of the plot. What do you think about the quality?
• When you have finished, shrink the plot back again using the same button, then right click on the TSInput Glyph and select 'Remove Tests' to clear the selection. You're now free to pick up another file and drop it on the Glyph. You could of course create lots of separate Glyphs by dragging and dropping each file on to the workspace, but it can get a bit messy that way.
• Repeat the exercise by looking at 'clipped.dac', 'spiked.dac', 'drifting.dac' and 'mains.dac'. Remember to zoom in at different levels because some anomalies can only be seen at certain scales.

Strain.dac
If we zoom in on strain.dac you'll notice that the data seems poorly resolved. There are too few data points recorded to confidently determine whether we have all the required peak values. Is this plateau real, or have we missed the peak value? Are there sufficient data points to accurately record this strain cycle? This is a good example of undersampled data. We need to remeasure this data at a higher sample rate. We will see later how this information is obtained more readily using the Frequency Spectrum glyph.

Spiked.dac
As its filename suggests, this data contains electrical spikes. These are usually characterized by a single freak point of great amplitude. Spikes are more clearly discerned in the Rainflow histogram that we'll discuss later in this tutorial.

Drifting.dac
This file contains a steady drifting of the mean value of the signal. In this case it arises because the strain gauge is expanding at a different rate to the component as the temperature increases. We will see this later in the Amplitude Distribution as well. All these anomalies and many more can be quickly detected using the techniques discussed in the next few pages. Read on to see how GlyphWorks can help.


Topic 2 –Time at level analysis
In this topic we discuss the 'Time at Level' and 'Amplitude Probability Distribution (APD)' analyses, with particular emphasis on using them to quickly identify signs of 'clipping', 'spikes' and signal drift in the recorded data. To run the analysis we need to create a simple calculation process similar to the one illustrated below.
• From the available data tree, drag the time signal file strain.dac on to the workspace.
• From the Function palette, drag the 'Amplitude Distribution' Glyph on to the workspace and attach it to the input file Glyph.
• We now need some way to plot the data, so drag the 'XYDisplay' Glyph from the Display palette and attach it to the Amplitude Glyph output as shown in the illustration below.
• Run the analysis and look at the shape of plot produced. This is an example of GOOD data.

Clipped Data
• From the available data tree, drag the time signal file clipped.dac on to the existing TSInput Glyph and rerun the analysis.
• Click on the XYDisplay glyph, then click the 'Full Plot' and 'Zoom-out' buttons to reset the axes and better see the clipped data.

Drifting Data
• From the available data tree, drag the time series file drifting.dac on to the TSInput Glyph and rerun the analysis.
• Click the XYDisplay Glyph and then click the 'Full Plot' and 'Zoom-out' buttons.

Save your Workflow
• You can now save the workflow you have just created to a process file by using 'File | Save Process' and giving it a name. You can then use this process again without having to set it up from scratch.

Example of Good Data
The above illustration is typical of GOOD data. In this particular case we notice a symmetric 'bell' shaped curve following the Gaussian Normal distribution. We'd expect to see this shaped curve for random vibration signals.

Example of 'Clipped' Data
Clipped data is where the real data values exceed the full scale limits of the calibrated acquisition unit. For example, suppose you configure your data acquisition unit to measure strain in the range –1000µε to +1000µε. Any values outside this range are 'clipped', so instead of recording the actual values the acquisition unit gives a false reading. For values greater than 1000µε the unit will always read exactly 1000µε, and similarly for values less than –1000µε the unit will return values of exactly –1000µε.


This causes a sudden increase in the number of 1000µε and –1000µε records logged, which is easily seen in the Amplitude Distribution. It is very common to miss clipped data if you only refer to the time signal plots.
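As a rough cross-check outside GlyphWorks, you can also count how many samples sit exactly at the signal extremes; a large count is a strong hint of clipping. The sketch below is a minimal numpy illustration under the assumption that clipped values saturate at the full scale limits; it is not a GlyphWorks feature.

```python
import numpy as np

def clipping_ratio(y, tol=1e-9):
    """Fraction of samples stuck at the minimum or maximum value.

    A healthy random signal only touches its extremes a handful of times;
    clipped data piles up there (e.g. at +/-1000 microstrain in the example).
    """
    y = np.asarray(y, dtype=float)
    at_limits = np.isclose(y, y.max(), atol=tol) | np.isclose(y, y.min(), atol=tol)
    return at_limits.mean()

# Example: flag a channel if more than 0.1% of its points sit at the extremes.
# if clipping_ratio(channel_data) > 0.001:
#     print("Possible clipping detected")
```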

Example of 'Drifting' Data
Long term drift in data is also a common problem. Drift can occur in strain gauge data through effects such as temperature changes during the measurement. If temperature variations are likely to be significant during the test then it is advisable to use strain gauges that have the same coefficient of thermal expansion as that of the component being tested. Otherwise you'll have to correct for the drift later in GlyphWorks. Accelerometer data can also exhibit drift, and this again can be corrected using GlyphWorks. Of course, some drift might actually be due to real artefacts of the data, so it's always important to distinguish what's real from what's anomalous. You can compare the measured data with other data measured previously, or try to think what happened during the test that might contribute to the drifting result that you can see.

What is 'Time at Level' and 'Amplitude Probability' Analysis?
Time at level analysis is illustrated in the diagram below. The amplitude of the time signal is split up into a number of ranges called bins. The duration over which the time signal occupies each bin is calculated and then presented in the form of a bar chart. Time at level analysis is useful for determining the statistical amplitude content of a signal and can be used to detect anomalies such as signal drift, spikes, clipping, etc.

Although time at level analysis is useful it is more common to represent the data as an amplitude probability distribution function (PDF). This is simply a way of ‘normalising’ the time at level plot so it does not change shape when the length of time history changes or the number of bins is changed. To do this we plot the values as a histogram instead of a bar chart. In histogram format the area of a column represents the time at level instead of the height. This means that the plot does not change shape as we increase the number of bins. Next we divide the time at level by the total time of the signal, therefore it always remains the same height irrespective of signal length. We can now compare the Amplitude PDFs for any signal irrespective of signal length or bin resolution.
(Figure: time at level analysis. A time signal, plotted against time in seconds, is split into amplitude bins; the time spent at each amplitude level, in seconds, is calculated and presented alongside as a bar chart.)
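For readers who like to see the arithmetic, here is a minimal numpy sketch of the time at level and amplitude PDF calculations described above. It assumes a constant sample rate and a chosen number of bins; in the tutorial itself the Amplitude Distribution glyph does this for you.

```python
import numpy as np

def amplitude_pdf(y, sample_rate, n_bins=64):
    """Time at level (seconds per bin) and amplitude PDF of a time signal.

    The PDF is the time at level divided by (total time x bin width), so its
    shape is independent of signal length and bin count, as described above.
    """
    y = np.asarray(y, dtype=float)
    counts, edges = np.histogram(y, bins=n_bins)
    dt = 1.0 / sample_rate
    time_at_level = counts * dt
    total_time = len(y) * dt
    bin_width = edges[1] - edges[0]
    pdf = time_at_level / (total_time * bin_width)
    return edges, time_at_level, pdf

# Example with synthetic Gaussian data standing in for 'good' strain data:
# y = np.random.normal(0.0, 100.0, size=100_000)
# edges, tal, pdf = amplitude_pdf(y, sample_rate=409.6)
```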


Topic 3 –Rainflow analysis
In this topic we discuss Rainflow Cycle Extraction and show how this can be used to identify 'spiked' data and 'short' data where the sample length is insufficient for high confidence results. To run the analysis we need to create a simple calculation process similar to the one illustrated below. You can create a new process if you wish, or you can drag the new glyphs on to your existing worksheet and build up a more complex analysis that considers both Time at Level and Rainflow.
• From the available data tree, drag the time signal file strain.dac on to the workspace, or alternatively drop it on your existing TSInput Glyph.
• From the Function palette, drag the 'Rainflow' Glyph on to the workspace and attach it to the input file Glyph.
• We now need some way to plot the data, so drag the 'Histogram Display' Glyph from the Display palette and attach it to the Rainflow Glyph output as shown below.
• Run the analysis and look at the shape of plot produced. This is an example of SHORT data.

Other tasks
• Repeat the analysis using 'spiked.dac', 'clipped.dac' and 'drifting.dac'.

The simple Rainflow analysis process is illustrated above. The 3D rainflow histogram shows cycle range on the x-axis, cycle mean on the y-axis and number of cycles on the z-axis. You can familiarise yourself with the characteristics of a rainflow histogram by maximising the histogram and changing the viewing angle with the toolbar buttons: Tower plot, Surface plot, Isometric, XY axes (Top), YZ axes (Right) and XZ axes (Left).

Example of Short Data
Look at the Rainflow histogram of the data in 'strain.dac'. Have a close look at the tip of the plot. These high range cycles create exponentially more damage than the lower range cycles and effectively dominate the fatigue damage. Look at the Top view and the Left view and notice how sparse the data is in this region. This plot would suggest that most of the damage would be attributed to only a few cycles. We need to confirm whether this is really representative. If we were calculating the fatigue damage on an anti-roll bar, for example, then these data could be representative, as most of the damage could be attributed to a few high load events like pot hole or curb strikes; however, if this data is from a vibration source, like a component on the engine for example, then the sample length is probably too short to calculate any result with confidence.


Example of Clipped Data
Clipping will artificially curtail the data to a lower range of values than is really present, and so clipped data often appears as a concentration of cycles at the extreme range as shown below. However, this is not always the case because some signals, for example, might be clipped at only the maximum values and therefore yield varying cycle ranges that might be overlooked. The 'Time at Level' or 'Amplitude Probability Distribution' is still the preferred identifier.

Example of Spiked Data
Using the data file 'spiked.dac' we see a slightly different shaped plot. Here we notice that most of the data is concentrated in the low range area and a few points are scattered in the extreme ranges. If we were to consider a fatigue analysis on this data we'd observe that nearly all the damage arose through only one very large amplitude cycle. We would have to confirm whether this is really representative and statistically significant. A more likely explanation is the presence of 'spikes' in the data. These are very common in strain gauge signals where the data, usually measured in millivolts, can easily be corrupted by external electrical noise. In this case it is very important that we effectively identify the high amplitude noise and remove it before proceeding with the analysis.

Example of Drifting Data
Drifting data is usually identified by a skewness in the mean distribution (Right view) in a similar fashion to that seen in the Time at Level plot. Compare the Rainflow and Time at Level distributions for the data in drifting.dac and notice the similarity.

What is Rainflow analysis?
Rainflow cycle counting is the cornerstone of fatigue analysis. A quantity of fatigue damaging energy is released when the stress is cycled into tension and back again. Rainflow cycle counting is the method used to extract these cycles from a time signal. As the amplitude of a cycle increases, its fatigue damage content rises exponentially. We will speak more about Rainflow Cycle counting in the later Fatigue Tutorials. The subject is covered in detail in nCode's training courses. In addition to fatigue analysis, Rainflow cycle counting can also prove valuable as a quick validity check on time signal data. It produces a 3D histogram that resembles the shape of an arrowhead for good data. The diagram can be used to spot spikes, check that the sample length was sufficiently long, and examine the amplitude PDF all from one diagram!
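To make the idea concrete, here is a minimal Python sketch of three-point rainflow counting in the spirit of ASTM E1049. It is an illustration only; it is not the exact algorithm, binning or residue handling used by the GlyphWorks Rainflow glyph.

```python
import numpy as np

def turning_points(y):
    """Keep only peaks and valleys (points where the slope changes sign)."""
    y = np.asarray(y, dtype=float)
    keep = [0]
    for i in range(1, len(y) - 1):
        if (y[i] - y[i - 1]) * (y[i + 1] - y[i]) < 0:
            keep.append(i)
    keep.append(len(y) - 1)
    return y[keep]

def rainflow_cycles(y):
    """Return a list of (range, mean, count) tuples, count being 1.0 or 0.5."""
    stack, cycles = [], []
    for point in turning_points(y):
        stack.append(point)
        while len(stack) >= 3:
            current = abs(stack[-1] - stack[-2])
            previous = abs(stack[-2] - stack[-3])
            if current < previous:
                break
            if len(stack) == 3:
                # The range involves the first point: count it as a half cycle.
                cycles.append((previous, (stack[0] + stack[1]) / 2.0, 0.5))
                stack.pop(0)
            else:
                cycles.append((previous, (stack[-2] + stack[-3]) / 2.0, 1.0))
                del stack[-3:-1]
    # Whatever remains (the residue) is counted as half cycles.
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(b - a), (a + b) / 2.0, 0.5))
    return cycles

# Example: cycles = rainflow_cycles(channel_data)
```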


Topic 4–Frequency analysis
In this topic we discuss frequency analysis and show how it can be used to identify electrical line interference and the likelihood of aliasing. To run the analysis we need to create a simple calculation process similar to the one illustrated below. You can create a new process if you wish, or you can drag the new glyphs on to your existing worksheet.
• From the available data tree, drag the time signal file mains.dac on to the workspace, or alternatively drop it on your existing TSInput Glyph.
• From the Function palette, drag the 'Frequency Spectrum' Glyph on to the worksheet and attach it to the input file Glyph.
• We now need some way to plot the data, so drag the 'XY Display' Glyph from the Display palette and attach it to the Frequency Spectrum Glyph output as shown below.
• Run the analysis and look at the shape of plot produced. This is an example of electrical line interference data.
• Repeat the analysis using 'strain.dac' and consider whether this data could be aliased.

Example of Electrical Line Interference
The simple Frequency Analysis process is shown above. Frequency in Hz is given along the x-axis while the mean square amplitude content in each frequency band is given on the y-axis. We can observe a spike located at 50Hz, which incidentally coincides with the electrical line frequency used throughout the UK. It is quite common to see electrical line interference in lab based data acquisition units arising through an earthing problem. This type of anomaly can be easily rectified by GlyphWorks provided the maximum real frequency is well below the electrical line frequency. If the real frequencies encroach on the line frequency then it's hard to distinguish the real data from the anomalous. We'll discuss this topic further in a later chapter.

Example of Aliased Data
Aliasing occurs if you select a sampling frequency that is very low compared with the frequency range of the signal you are measuring. This is illustrated in the figure below. In this example the measured signal is a unit amplitude sinusoidal wave with a frequency of 10Hz. Sampled at 100Hz in the red plot, we see a good frequency representation and only a slight loss of amplitude resolution, with the maximum amplitude reading 0.95 instead of 1.0. As a general rule we are taught to oversample by at least a factor of 10x the maximum frequency in the signal to ensure satisfactory amplitude resolution. The slight reduction in amplitude arises because we didn't sample at the exact moment the sine wave reached its zenith.


The blue plot shows what happens when we sample at only 25Hz (2.5x the maximum frequency). We still see frequency and phase information but the amplitude is now lost. Reducing the sample rate to only 12.5Hz results in the green plot, which now shows an incorrect frequency response too. This frequency 'aliasing' error will occur when the sample rate is reduced below a factor of 2x the maximum frequency of the signal; this is known as the 'Nyquist' limit. We can use the Frequency Analysis plot to check whether aliasing is likely to be a problem with our data. Switch the y-axis to 'log-scale' and look for the highest frequency of the recorded data before it disappears into very low level noise. Now compare this with the maximum frequency on the x-axis (the 'Nyquist' limit) and make sure there's a factor of 5x between the two. If there is then you'll probably be fine; if there isn't then make sure an 'anti-aliasing' filter was used during the original data acquisition, otherwise your data could be seriously compromised!

This data could be under-sampled and might show aliasing errors!
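The sampling example above is easy to reproduce. The sketch below, assuming numpy and scipy are installed, samples a unit amplitude 10Hz sine at 100Hz, 25Hz and 12.5Hz and reports where the PSD peak lands; at 12.5Hz the peak appears at the wrong (aliased) frequency, as described in the text.

```python
import numpy as np
from scipy import signal

F_SIGNAL = 10.0     # Hz, unit amplitude sine as in the example above
DURATION = 20.0     # seconds of data

for fs in (100.0, 25.0, 12.5):
    t = np.arange(0.0, DURATION, 1.0 / fs)
    y = np.sin(2.0 * np.pi * F_SIGNAL * t)

    # Welch PSD; the dominant peak should sit at 10 Hz unless aliasing occurs.
    freqs, psd = signal.welch(y, fs=fs, nperseg=min(256, len(y)))
    peak = freqs[np.argmax(psd)]

    print(f"sample rate {fs:5.1f} Hz -> Nyquist {fs / 2:5.2f} Hz, "
          f"PSD peak near {peak:5.2f} Hz")
```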

What is Frequency analysis?
Frequency analysis is founded on the principles postulated by the French mathematician J. Fourier. He reasoned that all periodic time signals could be broken down into a number of sinusoidal waves of various frequency, amplitude and phase. When all the waves were later added together they would recreate the original time signal. Today we employ the 'Fast Fourier Transform' (FFT) algorithm to carry out this frequency decomposition and this is provided in GlyphWorks. Frequency analysis data is typically presented in graphical form as a 'Power Spectral Density Function (PSD)'. Essentially a PSD displays the amplitude of each sinusoidal wave of a particular frequency. Frequency is given on the x-axis. The mean squared amplitude of a sinusoidal wave at any frequency can be determined by finding the area under the PSD over that frequency range. So, if you want to find the mean square amplitude of a 4Hz harmonic for example, you simply calculate the area under the PSD between say 3.5-4.5Hz. The approximate amplitude of a sinusoidal component can be found from the equation:

Amplitude ≈ √(2 × mean square amplitude)

(Figure: example PSDs of four signals, each shown as a time history alongside its PSD plotted against frequency in Hz: a sine wave, a broad band process, a narrow band process and a white noise process.)

This figure shows examples of four different PSDs. The PSD of a sine wave is simply a spike centred at the frequency of the sine wave. The area under the spike represents the mean square amplitude of the sine wave. A 'Narrow band' process is one that covers only a narrow range of frequencies. This is easily seen in the PSD. A 'Broad band' process is one that covers a wide range of frequencies. This might consist of a single, wide spike or a number of distinct spikes as shown in the diagram. A 'White noise' process is an ideal signal with equal amplitude content for all frequencies. It is commonly used when preparing drive signals for tests. PSDs are useful for detecting resonance in components, aliasing in the data, frequency interference, etc. This subject is covered in more detail in nCode's training courses.
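As a worked check of the 'area under the PSD' rule, the hedged sketch below (numpy and scipy assumed) builds a 10Hz sine of amplitude 2, integrates its PSD around 10Hz to get the mean square, and recovers the amplitude with the formula above.

```python
import numpy as np
from scipy import signal

FS = 1000.0                                     # sample rate, Hz
t = np.arange(0.0, 10.0, 1.0 / FS)
amplitude = 2.0
y = amplitude * np.sin(2.0 * np.pi * 10.0 * t)  # 10 Hz sine wave

freqs, psd = signal.welch(y, fs=FS, nperseg=2048)

# Mean square amplitude = area under the PSD around the 10 Hz spike.
band = (freqs > 9.0) & (freqs < 11.0)
mean_square = np.trapz(psd[band], freqs[band])

recovered = np.sqrt(2.0 * mean_square)          # Amplitude ~ sqrt(2 x MS)
print(f"mean square = {mean_square:.3f}, recovered amplitude = {recovered:.2f}")
```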


Topic 5–Statistical Analysis
In this topic we discuss statistical analysis and show how it can be used to very quickly ascertain whether data is good or bad and whether it is comparable to that measured before. To run the analysis we need to create a simple calculation process similar to the one illustrated below. You can create a new process if you wish, or you can drag the new glyphs on to your existing worksheet.
• From the available data tree, drag the time signal file strain.dac on to the workspace, or alternatively drop it on your existing TSInput Glyph.
• From the Function palette, drag the 'Statistics' Glyph on to the worksheet and attach it to the input file Glyph.
• We now need some way to plot the data, so drag the 'Meta Data Display' Glyph from the Display palette and attach it to the Statistics Glyph output as shown below.
• Run the process and, in the Meta Data Display glyph, expand the branches 'Channel Metadata' and 'Statistics1_Results'.

Statistical analysis is a very revealing way in which you can compare measured data with measurements taken previously during other studies. Most engineers already know what the measured data should look like based on previous experience. We typically look at the overall range and mean and also compare the amplitude (Time at Level) and frequency content of the data and look for spikes. This comparison is usually done visually by the engineer after the data has been downloaded to his desktop computer, and it is often too late to re-measure anything that might have gone wrong. Ideally we need some very simple numerical values that we can quickly compare whilst performing the test. Basic statistical analysis is ideally suited for this role. GlyphWorks will calculate all the most commonly used statistical properties of the data. The information panels on the next two pages have been prepared to remind us what these properties refer to and how we can use them.


Statistical analysis

Statistical analysis is concerned with reducing a long time signal to a few numerical values that describe its characteristics. These are ideal when we need to quickly assess whether data is good or bad. Statistical properties can reveal anomalies like spikes, drift, clipping and an inadequate sample length. The most common statistical quantities are based on the amplitude PDF discussed earlier in this tutorial. They describe the shape of the PDF in terms of its central tendency, its spread, its symmetry and its area profile. These are illustrated in the diagrams below.

Measures of Central Tendency: give an indication of the mid value in the PDF.
Median: the mid value, with an equal number of points above and below (11 in the diagram).
Mean: the average value, or the centre of area of the PDF about the x axis (12 in the diagram):

\bar{y} = \frac{1}{N} \sum_{n=1}^{N} y_n

Mode: the value that occurs most often, or the location of the peak of the PDF (8 in the diagram).

[Figure: example amplitude PDF with the median, mean and mode marked.]

Measures of Spread: give an indication of the width or range of values in the PDF; used to quickly identify problematic data.
Range = y_max − y_min: shows the whole range of the data, and is useful for assessing the calibration of the data acquisition equipment.
Mean deviation: the average deviation from the mean value, or the centre of area of the PDF about the mean:

s = \frac{1}{N} \sum_{n=1}^{N} \left| y_n - \bar{y} \right|

Variance: similar to the mean deviation, but takes the square of the deviation from the mean rather than using the modulus operator. This is a smooth mathematical function and is preferable to the modulus, although it is less intuitive:

\sigma^2 = \frac{1}{N} \sum_{n=1}^{N} (y_n - \bar{y})^2

Standard deviation σ: take the square root of the variance to make the dimensions consistent with the input units. Again this is less intuitive than the mean deviation, but it is the most commonly used measure of spread.

[Figure: example amplitude PDF illustrating the measures of spread.]


Combined measures of central tendency and spread: used to quickly identify problematic data.
Mean Square (MS): defined as the 2nd moment of area of the PDF about the x axis, this measure is also known as the ‘intensity’ of the signal:

\overline{y^2} = \frac{1}{N} \sum_{n=1}^{N} y_n^2

It represents both central tendency and spread and is a very useful parameter for quickly checking a measured signal to ensure it is good. Any change in mean or range will be reflected in this parameter.
Root Mean Square (RMS): take the square root of the MS to make the dimensions consistent with the input units. This measure is also used as a simple quality check on the measured data, as any change in mean or range will be reflected in this parameter.



Measure of Symmetry
Skewness: defined as the 3rd moment of area of the PDF about the mean, this measure is useful for assessing the degree of asymmetry of the PDF about the mean. This is useful for identifying signal drift, cyclic hardening, etc.:

\text{skewness} = \frac{1}{N \sigma^3} \sum_{n=1}^{N} (y_n - \bar{y})^3

[Figure: example PDFs with negative, zero and positive skewness about the mean.]

Measures of Area Profile
Kurtosis: defined as the 4th moment of area of the PDF about the mean, this measure is useful for assessing the likelihood of extreme values (outliers):

\text{kurtosis} = \frac{1}{N \sigma^4} \sum_{n=1}^{N} (y_n - \bar{y})^4

A high kurtosis value shows significant area in the upper and lower tails of the PDF at the expense of the mid portion, indicating a likelihood of extreme outliers. The normal distribution has a kurtosis value of 3; distributions with a larger kurtosis are more prone to extreme outliers.

[Figure: example PDFs with low and high kurtosis.]

Crest Factor: defined as the ratio between the absolute maximum value and the standard deviation, this measure is useful for assessing how ‘peaky’ a signal is. It is very similar to the kurtosis value, but is better for detecting a single extreme and possibly anomalous value that might otherwise be averaged out with the kurtosis method. This approach is useful for spike detection.

CF = \frac{\max(|y|)}{\sigma}
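These quantities are simple to reproduce outside GlyphWorks. The sketch below is not the Statistics Glyph, just a minimal NumPy illustration of the formulas in this panel applied to an arbitrary array of samples:

    import numpy as np

    def basic_statistics(y):
        """Return the simple statistical measures discussed in this panel."""
        y = np.asarray(y, dtype=float)
        mean = y.mean()
        dev = y - mean
        sd = np.sqrt(np.mean(dev**2))  # population standard deviation, as defined above
        return {
            "range": y.max() - y.min(),
            "mean": mean,
            "mean_deviation": np.mean(np.abs(dev)),
            "variance": np.mean(dev**2),
            "std_dev": sd,
            "rms": np.sqrt(np.mean(y**2)),
            "skewness": np.mean(dev**3) / sd**3,
            "kurtosis": np.mean(dev**4) / sd**4,
            "crest_factor": np.max(np.abs(y)) / sd,
        }

    # Example with a synthetic signal (hypothetical data, not strain.dac):
    signal = np.random.default_rng(0).normal(0.0, 1.0, 10000)
    print(basic_statistics(signal))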


Tutorial 2—What we have learned
In this tutorial we have learned how to view data signals in GlyphWorks and how to use various engineering analysis glyphs to perform the most common signal processing functions. We have paid particular attention to anomalies and have introduced some of the most revealing analysis techniques. We have considered:
• Time at Level and Amplitude Probability Distribution
• Rainflow Analysis
• Frequency Spectrum Analysis
• Statistical Analysis

Building up glyphs and saving your workflows
During tutorial 2 you have used three analysis processes. A strength of GlyphWorks is that you can concatenate virtually any number of glyphs to tailor your own analysis procedure. We can combine all the analyses discussed here into one GlyphWorks process that we can use to rapidly check a measured time signal for anomalies. We can then save this process for reuse at any time. If you have time, build a full anomaly detection process using the glyphs we discussed here. You can add other functions if you need to. Have a look at the online manual for more information on all the available glyphs.


Tutorial 3—Data Manipulation in GlyphWorks

Data manipulation
Learning objectives
In this tutorial we’re going to take a real time signal that contains many of the anomalies previously discussed and cleanse it so it’s suitable for use in a fatigue life analysis. The data was collected under actual working conditions from a strain gauge attached to a cooling fin rotating on a shaft. We’ll start by discussing the fatigue problem and then look at how we can clean the data.

GlyphWorks 2.2—Tutorial 3

Topic 1 – Description of the data and how it was collected. In this topic we discuss the origin of the data and describe the problems experienced by the engineers during its collection.
Topic 2 – Extracting usable data from a time series file. In this topic we look at methods for extracting sections of data from a much larger file.
Topic 3 – Graphical data editing. In this topic we will use the Graphical Editor to manipulate the erroneous data and correct for a change in calibration.
Topic 4 – Detecting and removing spikes. In this topic we learn how to use the amplitude distribution and spike detection glyphs to detect and remove spikes from the data file.
Topic 5 – Removing signal drift and electrical interference from a signal. In this topic we learn how to use the Butterworth Filter Glyph to effectively remove unwanted frequencies from the data. These relate to a 50Hz electrical line interference and a low frequency signal drift.
Topic 6 – Calculating stress from gauge results in strain. In this final topic we learn how to use the Calculator Glyph to convert the measured data from strain in με to stress in MPa.

Pre-requisites
You must have completed tutorials 1 and 2. You’ll also need the file ‘Sg.dac’ in your working directory.


Topic 1 –Description of the data and how it was collected
Before we start to analyze the data we need to know where it came from and why it was necessary in the first place. In this topic we discuss the origin of the data. Before you move on to the next topic, you might want to run this data through your anomaly analysis process, developed in the previous tutorial, to see if you can identify all the problems. Introduction to the design problem: this tutorial is based on a real engineering problem. It involves a failure investigation on a ducted shaft. The shaft is used to drive a particular machine and also transports high-pressure hot gases through its core. Cooling fins are mounted along the length of the shaft as shown in the figure below. A steel ducting surrounds the shaft and cooled water is passed through it. The rotating fins circulate the water through the pipe. The shaft rotates at a steady 1.48Hz. The data sample was taken from a strain gauge located adjacent to the root of the fin and measures the vibration loading at the root. The vibration load arises through the turbulent flow of the cooling water over the fin. The data acquisition was fairly traumatic and the resulting strain gauge data has known problems. The strain gauges were not thermally matched to the fin material and therefore the strain readings change with temperature. This causes drift in the signal. As the shaft operates at a steady temperature this is not a significant problem. The intention was to bring the shaft up to working temperature and then calibrate the gauges.

The calibration was completed after 900 seconds but unfortunately the strain gauge signal was lost after 1200 seconds and it would be imprudent to conduct an analysis on only 300 seconds of data. The cost of repeating the test is high so the engineer wants to use the data already collected. We therefore have to analyse the data and cleanse the signal so it can be used for fatigue analysis. Following the test, it was noticed that an electric arc welder was also in use in the next room. This has introduced spikes in the data. In this tutorial we take the anomalous time signal and ‘cleanse’ it to recreate what we think the data should have been had the anomalies not been present.

[Figure: the measured data and the cleansed data.]

To cleanse this data we need to carry out the following manipulation steps:
1. Extract the intact signal from 0 to 1200 seconds and discard the region containing dropout.
2. Edit the signal to remove the recalibration step.
3. Identify and remove the spikes.
4. Frequency filter to remove the low frequency drift and high frequency electrical line interference.
5. Verify that the signal is long enough for a statistically representative fatigue analysis.
When we’ve finished we’ll be able to use this signal in the fatigue analyses in the next tutorials.


Topic 2— Extracting usable data from a longer time signal file
In this topic we look at how to extract the good data from the signal and discard the poor data.
• Create a new GlyphWorks process using the menu ‘File | New Process’.
• From the available data tree, drag the time series file sg.dac onto the analysis workspace.
• Click the ‘Display’ box shown on the bottom of the Glyph to see a miniature plot of the data.
• Enlarge the time signal plot by dragging the bottom corner of the Glyph or pressing the maximise button at the top right hand corner.
• Select the good data between 0 and 1200 seconds (see the adjacent information panel).
• To see the results so far, drag the XYDisplay glyph from the glyph palette and link it to the output pad. You’ll notice that it contains only the good data.

This facility is also useful for separating a single event from a long data file. For example, we might switch on the data acquisition unit at the start of the day and then perform a number of tests one after the other without stopping the unit in-between. When we then come to analyse the data we often want to separate the long record into a number of shorter distinct events. In this way we can compare the damage created by each event and quickly see where problems might arise with our product. The Graphical Editing Glyph is also used to select and manipulate data. This has some more advanced features than those seen here and you’ll see this in action in the next task.

The Time Series Input Glyph allows us to pick regions of data to analyse. This is useful in this example because the data after 1200 seconds contains erroneous drop-outs, so we can use this facility to highlight only the good data for analysis. The above flow shows how we selected the good data in the Input glyph and then demonstrates how this is passed on to following glyphs with the bad data being discarded.


Selecting data in GlyphWorks—a few hints
There are several ways in GlyphWorks to select a section of data from a time history file.
• Hold down the control key and click and drag the section over the data. You will see something like the plot shown below on the left. You can zoom in on the 1200 second mark to refine the selection by dragging the orange selection square, as shown on the right. Hint: don’t move the mouse cursor too quickly—let your computer ‘catch up’ with its movement.
• If you know the exact times at which the good data starts and finishes, then you might prefer to enter these values numerically rather than graphically. Right click on the TSInput Glyph and select ‘Properties’ from the menu. Now click on the ’Advanced’ tab and notice the property called ’MarkedSections’. This gives the time coordinates you specified in the graphical selection above. You can edit these values now for the exact time, 0 to 1200 seconds.

The syntax is always the same: enclose the range in curly brackets and separate the start and end points with a comma. You can enter any number of such ranges in the Marked Sections field as shown below. You can combine these methods by first of all making a graphical selection and then refining the coordinates by editing the numeric values in the Advanced Properties.
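For example, keeping only the first 1200 seconds corresponds to a single entry of the form

    {0,1200}

and a second, purely hypothetical range such as {1500,2000} could be listed alongside it in the same field if you needed to keep two separate sections (the screenshot referred to above shows the exact layout of the field).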

Hint: To ensure that you save the marked sections for re-use later, click Store Marked Sections = True. This is a real timesaver if all the input files have the same range, for example if multi channel input files all have spikes at the same point in time.


Topic 3—Graphical data editing
In this topic we see how to use the Graphical Editing Glyph to remove the recalibration error in the data. We can select the area of data to be corrected and then tell the glyph to automatically rescale it for us.
• Disconnect the XYDisplay from the TSInput Glyph used in the previous task. (Right click on the pipe and select ‘Disconnect’ from the menu.)
• From the glyph palette’s Function menu, drag the ‘Graphical Edit’ glyph onto the workspace and link it to the TSInput Glyph.
• Re-connect the XYDisplay Glyph to the output of the Graphical Edit Glyph so you can see the result of the editing.
• Now press the Run button to pass the input data through the Graphical Editor ready for processing.
• In the Graphical Editor, zoom in to the step and select all of the data from the step to the end of the signal.
• You must now tell the Graphical Edit glyph what to do with the selected data, in this case offset it by –5000 με in order to line it up with the rest of the signal. Right click on the Glyph and set the properties to: Edit Method = Scale and Offset, Offset = -5000.
• From the Advanced properties form select ‘StoreMarkedSections’ = True; this will remember these coordinates for future use.
• It is always a good idea to save your flow regularly, so do it now.

Change the EditMethod to Scale&Offset and then select an offset of –5000 με. How did we know the offset was –5000 με? It was assessed visually, by changing the Graphical Editor XY Graph property ‘Labels’ to YAxis = Label major and checking the Grid box in the Styles property.

The GlyphWorks process for this task is shown above. The difficulty with this analysis is in selecting the exact start time for the recalibration. At full scale the width of the cursor can cover several seconds of data. The trick is to select the approximate location and then zoom in on the start point (click either side) and drag the orange selection square until you have the exact point. This is shown in the plot opposite. The only remaining task now is to change the properties so the Graphical Editor will undo the calibration offset.


Topic 4 – Detecting and removing spikes
In this topic we look at spike identification and removal. We introduce three methods of spike detection and compare their results. We then see how to use the Graphical Editing Glyph to display the spikes and automatically remove them. This topic is split into the following parts:
a. Using the Amplitude Distribution Glyph to detect spikes
b. Using the Spike Detection Glyph and the Graphical Editor to display and remove spikes
c. Using the Differential (or gradient) method
d. Using the Statistical method

In order to use these you need to specify the appropriate threshold parameters. Each method is suited to a particular type of spike, and you might have to use two methods to completely remove all your spikes. Click the run button; the results will look like the screenshot shown here.

Spike identification is very subjective and it is not recommended that you rely on a purely automatic removal without first validating it. In the interactive mode shown in this tutorial, you choose the method and the threshold values and the program searches for all the spikes. A graphical view of the spikes is then presented so the user can check whether they agree with the choice. Spikes can be removed graphically using this mode. The procedure is outlined below.

[Flowchart: choose a method and determine the threshold values → run the spike identification and confirm the choice using the interactive Graphical Editor → agree? If NO, choose a new method or threshold value and repeat; if YES, continue.]

About spikes

Spikes are a common problem with strain gauge data. You could manually edit the spikes using the graphical editor if you wanted: simply zoom in on the spike and overwrite it with a ramp, for example. However, this can become tedious where many spikes are present or where many channels of data have been recorded. For this reason, engineers have been looking for methods of automatically detecting and correcting spikes. You can identify and remove spikes with a lot of help from GlyphWorks, as you will see. To date, no one has invented a completely reliable method to accurately detect all types of spike. Most methods seem over-sensitive, and engineers themselves have differing opinions on what really is a spike. Many times, either too many or too few spikes can get removed from real data – each with consequences. The main detection methods available in GlyphWorks are:
• Amplitude threshold detection
• Differential (or gradient) threshold detection
• Statistical threshold detection
• Crest factor threshold detection

In automatic mode you enter the method and threshold values and GlyphWorks automatically removes any spikes from any number of time histories entered. The recommended approach is to process a few files interactively until you gain confidence in the method and threshold values. When you are confident, you can then process the remaining files automatically. It should be remembered that time histories taken from different locations or using different types of transducers, e.g. strain gauges or accelerometers, will all exhibit different spike characteristics and will therefore require different methods and threshold values. In the following topics you will look at the theory of each method in turn and see how well each works with our data. You will then use the interactive mode to remove the spikes using the most appropriate method.


Topic 4a—using the Amplitude Distribution glyph to detect spikes
This is the simplest method for detecting spikes. It is suited to spikes that have large amplitudes compared to the rest of the data. If the spikes have an amplitude comparable to that of the data, then this is not the method to use.
1. Disconnect the XYDisplay Glyph from the Graphical Editor.
2. From the glyph palette’s Function menu, drag the Amplitude Distribution glyph on to the output from the Graphical Editor.
3. Reconnect the XYDisplay Glyph to the output of the Amplitude Distribution Glyph.
4. Right click on the Amplitude Distribution Glyph and set the property: Analysis Type = pointCount.
5. Click on the XYDisplay Glyph and change the Y axis to show log Y values.

[Figure: time signal with the obvious spikes annotated, plus one questionable point.]

The method is illustrated in the diagram below. The spikes are clearly seen to have an amplitude much larger than the rest of the data. We need to establish threshold values that clearly tell the spike removal glyph how to discriminate spikes from real data. These thresholds are best obtained from the Amplitude Distribution. The high amplitude spikes are seen as individual points that lie outside of the main statistical distribution; the log axis emphasises this. In our example, some of the spikes are very easy to see as these have an amplitude that is much larger than the rest of the real data. We could use the plot above to determine reasonable threshold values to use in the automatic Spike Removal Glyph. A maximum value of 2000 με is clearly appropriate and we could try a value of –6500 με for the lower limit. There are a few spikes that we are unsure about, however. These lie very close to the main statistical distribution and might therefore represent real data.

On running the flow you will clearly see some spikes whose amplitudes lie outside the main statistical distribution.

[Figure: Amplitude Probability Plot with the upper and lower threshold lines marked. Choose appropriate threshold values by looking at the Amplitude Probability plot; spikes lie outside of the main statistical distribution.]
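The amplitude threshold idea is easy to prototype outside GlyphWorks. The sketch below is plain NumPy rather than the Spike Detection Glyph; the thresholds are the ones suggested in this topic:

    import numpy as np

    def amplitude_spikes(y, y_min=-6500.0, y_max=2000.0):
        """Return the indices of samples lying outside the amplitude thresholds."""
        y = np.asarray(y, dtype=float)
        return np.flatnonzero((y > y_max) | (y < y_min))

    # Hypothetical usage on a strain signal in microstrain:
    # spike_idx = amplitude_spikes(strain_signal)
    # print(len(spike_idx), "suspect samples")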


The Automatic Spike Detection Glyph is designed to pick out spikes according to the method and threshold values chosen by the user. It’s the job of the Graphical Editor to then remove these spikes automatically. The Spike Detection Glyph therefore has two output pads, the blue pad passes the unedited time signal through, while the orange pad passes on the locations of all the spikes found. The Graphical Editor has two input pads. In task 3 we only used the blue input pad to pass the time signal data. In this example we also use the orange pad to pass the anomaly data containing the spike locations. The Graphical Editor knows that it is connected to the Spike Detection Glyph and therefore is required to ramp over the spikes. It clearly highlights the spikes so you can see them and confirm whether or not you agree with the choices. You can edit the method and threshold values to refine the selection. In this example we see that the amplitude method has correctly spotted a number of spikes but has incorrectly marked some of the real data towards the end of the signal and has missed some of the spikes towards the beginning. At this stage we could go into the Graphical Editor and manually select the spikes and then use the property EditMethod = Ramp. Alternatively we could try another approach.

Topic 4b—using the Spike Detection Glyph to automatically identify and filter spikes.
1. From the glyph palette’s Anomaly menu, drag the Spike Detection Glyph on to the output of the Graphical Editor as shown.
2. From the glyph palette’s Function menu, drag another Graphical Editor Glyph on to the output of the Spike Detection Glyph.
3. Ensure that both blue and orange pads are connected between the Spike Detection and the Graphical Editor Glyphs. (The blue pad passes the time signal data, while the orange pad passes the spike information.)
4. Right click on the Spike Detection Glyph and select Properties. Use the following values: Method = Amplitude, YMax = 2000, YMin = -6500.
5. Run the process to see the identified spikes. You could add another Display Glyph to the output of the second Graphical Editor to confirm these spikes have been removed.


Topic 4c—using the Differential (or Gradient) Method
1. Delete the Amplitude Distribution glyph and its accompanying Display Glyph; we won’t need them again in this exercise.
2. Right click on the Spike Detection Glyph and change the Method = ‘AutoDifferential’.
3. Run the flow and notice how few spikes have been identified by this method.

Not all spikes have amplitudes significantly greater than the parent data. The signal below shows a time signal that contains some smaller amplitude spikes. We can identify these by comparing their gradient (point-to-point rise time) with that of the parent data. Spikes are typically characterised by one anomalous point and therefore have a very steep slope that can be seen on a Gradient Probability Plot. GlyphWorks allows us to specify a threshold gradient above which the data is recognized as a spike. In many cases the threshold value can be obtained automatically, so the user doesn’t need to do anything else. This is one of the most reliable and simple spike detection methods; unfortunately it is quite unsuitable for our analysis, because we have a high frequency anomaly in the form of electrical line interference. If we were to zoom into any part of the data we would see the very high frequency jagged profile. This gives rise to high gradients in the data and makes it almost impossible to distinguish the spikes from the electrical interference. We could choose to filter out the electrical interference first, of course, but this would present an additional complication because it would also tend to smooth out the spikes, making them even harder to distinguish from the real data. We have no option but to try an alternative method in this case.

[Figure: at high magnification we see the high frequency electrical interference that is confusing our spike detection analysis.]

[Figure: Rise/Gradient Probability Plot with the spikes marked. Choose the threshold value by looking at the Rise Probability plot; spikes lie outside of the main statistical distribution.]
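Again purely for illustration, a gradient check can be prototyped with a point-to-point difference. The sample interval and threshold below are hypothetical; in GlyphWorks the threshold can be chosen automatically:

    import numpy as np

    def gradient_spikes(y, dt, max_rate):
        """Flag points whose point-to-point rate of change exceeds max_rate (units/s)."""
        y = np.asarray(y, dtype=float)
        rate = np.abs(np.diff(y)) / dt
        # A steep rise into or out of a sample marks it as a suspect point.
        suspect = np.zeros(y.size, dtype=bool)
        suspect[1:] |= rate > max_rate
        suspect[:-1] |= rate > max_rate
        return np.flatnonzero(suspect)

    # Hypothetical usage: 0.002 s sample interval, 1e6 microstrain/s threshold
    # idx = gradient_spikes(strain_signal, dt=0.002, max_rate=1e6)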


The Statistical Method is slightly more sophisticated but is still straightforward and easy to use. It calculates a running standard deviation (SD) of the data, and valid data is deemed to lie within a region defined by some multiple of the SD. This is illustrated in the diagram below. The only problem with this method is choosing the threshold SD value to use. As a rule of thumb, an SD multiple of 3 or 4 is usually a good starting point. GlyphWorks makes it easy to view the output so that you can quickly gauge whether the threshold is reasonable. A problem might arise when several contiguous data points have the same value, as in the case of a horizontal line. In this case the running SD would tend towards zero and all points immediately after the line would register as spikes. A gating value is therefore assigned so that the minimum SD does not drop below this value. A gate of 5% to 7% is usually a good starting point. This method will find spikes with an amplitude comparable to the amplitude of the data. It is generally a very good method.

Topic 4d—using the Statistical Method
1. Right click on the Spike Detection Glyph, change the Method = Statistical, and enter NumStanDevs = 4 and GateValue = 5.
2. Run the flow and notice how all the spikes have been correctly identified.
3. Zoom in to any of the spikes to get a good perspective and press the golden arrow buttons to toggle between them. Do you agree with the choice?

[Figure: the running standard deviation is calculated along the signal; a spike is detected when a value passes through some multiple of it.]

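A rough prototype of the running standard deviation idea is sketched below. This is not the GlyphWorks implementation; the window length and the way the gate is applied are assumptions made for illustration:

    import numpy as np

    def statistical_spikes(y, num_std=4.0, gate_pct=5.0, window=100):
        """Flag points lying more than num_std running-SDs from the running mean.
        The running SD is not allowed to fall below gate_pct % of the overall SD."""
        y = np.asarray(y, dtype=float)
        gate = (gate_pct / 100.0) * y.std()
        idx = []
        for i in range(y.size):
            lo = max(0, i - window)
            seg = y[lo:i] if i > lo else y[:1]
            mean, sd = seg.mean(), max(seg.std(), gate)
            if abs(y[i] - mean) > num_std * sd:
                idx.append(i)
        return np.asarray(idx, dtype=int)

    # Hypothetical usage: idx = statistical_spikes(strain_signal, num_std=4, gate_pct=5)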


Topic 5a —Frequency filtering with a Butterworth Filter
In this topic we introduce the Butterworth filter and show how it can be used to remove unwanted frequencies from our data. In this example we are interested in removing the 50Hz electrical interference and the low frequency drift associated with the temperature variation.
1. Drag a Butterworth Filter glyph on to the output pad of the second Graphical Editor (see the screen shot opposite).
2. So that you can see the result, attach an XYDisplay glyph to the Butterworth filter’s output pad.
3. On the ButterworthFilter glyph change the properties to: Type = BandPass, Frequency1 = 0.5, Frequency2 = 20.
4. Run the analysis and notice the warning in the Diagnostics panel.
5. On the ButterworthFilter Glyph change the property DCwarning = 0 to disengage the warning and rerun the analysis.

Frequency Filtering
We need to remove the very low frequency drift and the 50 Hz (UK) electrical line interference. There are a number of frequency filtering tools within GlyphWorks and these are covered in detail in nCode’s training courses. We will only look at the basic Butterworth filter here because this is the most widely used. There are two basic filter types available known as ‘Low Pass’ and ‘High Pass’. From these are derived the ‘Band Pass’ and ‘Band Reject’ filters. The graphic below illustrates the effect.
[Figure: the four basic filter types plotted against frequency (Hz). Low Pass: low frequencies pass through the filter, high frequencies get cut. High Pass: low frequencies get cut, high frequencies pass through. Band Pass: this band of frequencies passes through, all others get cut. Band Reject: this band of frequencies gets cut, all others pass through.]

A frequency filter is used to remove frequencies from a time signal. In this example we need to remove low frequency drift, below about 0.5Hz, and high frequency electrical line interference, roughly above 30Hz. It is alright to remove all frequencies above 30Hz as the PSD tells us that only electrical line interference is present above this value and no real data is affected. The lower limit is harder to choose because it is hard to see where the drift ends and the real data begins. Also, filters do not cut in as sharply as the illustrations suggest. They tend to roll in gradually and therefore we are quite likely to lose some real data. We will conduct a quick check on this later to see if it is acceptable.
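For comparison outside GlyphWorks, a band-pass Butterworth filter with the same corner frequencies can be sketched with SciPy. This is not the Butterworth Filter glyph itself, and the sample rate fs is an assumption, so substitute the real value from the data file:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(y, fs, f_low=0.5, f_high=20.0, order=4):
        """Zero-phase band-pass Butterworth filter between f_low and f_high (Hz)."""
        b, a = butter(order, [f_low, f_high], btype="bandpass", fs=fs)
        return filtfilt(b, a, np.asarray(y, dtype=float))

    # Hypothetical usage, assuming a 500 Hz sample rate:
    # filtered = bandpass(strain_signal, fs=500.0)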


The Butterworth filter is renowned for its reliability, speed and ease of use. The Glyph will issue a warning in the rare case where there might be a problem, so the user is aware of the fact and can perform the necessary checks. In this example we must filter out the very low frequency drift below approximately 0.5Hz. As this is below the Glyph’s DC warning threshold of 2% of Nyquist, the process will error and force you to change the threshold limit. This is done to ensure you are aware of the possible consequences of your actions. We can check that the filter has performed properly by looking at the frequency content both before and after filtering, and also by looking at the output time signal to make sure there are no strange amplitude modulations, etc. You can compare the PSDs by changing the XYDisplay properties to overlay and varying the line styles as shown in the plot below.

Topic 5b—Comparing the Frequency Spectra
In this topic we compare the frequency content of the data both before and after the filtering. This will ensure that no spurious effects have been introduced while filtering below the warning threshold.
1. Drag two Frequency Spectrum glyphs from the glyph palette and connect them to the outputs of the second Graphical Editor and the Butterworth Filter glyphs respectively, as shown below.
2. Drag an XYDisplay glyph from the palette and attach a red input pad to each Frequency Spectrum glyph.
3. Run the analysis to compare the two PSD plots.
4. You can choose to overlay the two spectra by pressing the button.

See glyphref.pdf for a detailed description of the Butterworth filter.


Topic 6—Calculating stress from strain using the Calculator Glyph
In this topic we use GlyphWorks’ Calculator Glyph to convert the measured strain gauge signal into stress for use in subsequent fatigue analyses.
1. Drag the Time Series Calculator glyph onto the output pad of the Butterworth Filter.
2. Drag an XYDisplay glyph onto the Calculator’s output pad so you can see the results.
3. Run the process so the Calculator can see all the data first of all.
4. Right click on the Calculator and select Properties, then edit the equation as shown on the opposite page.
5. Rerun the flow and see how the data has been converted to stress in MPa.
6. Drag the Time Series Output Glyph on to the Calculator output and change its properties so it saves a file called “sg1_stress.dac” for use in the SN fatigue calculation.
7. Drag another Time Series Output Glyph on to the Butterworth Filter output and change its properties so it saves a file called “sg1_strain.dac” for use in the EN fatigue calculation.
8. Run the analysis and then press the button to refresh the data tree.

GlyphWorks can carry out stress life (SN) fatigue analyses with stress based input files. As you will have noticed from the tutorials so far, the example time series file was collected from a strain gauge and is in microstrain (με). So, in this topic you will convert it into stress units by applying the formula:

\sigma = \frac{\varepsilon \cdot E}{10^6}

(where 10^6 converts from microstrain to strain, and E = 203400 MPa is the elastic modulus of the material RQC100 that is used in the later fatigue analyses).

The Time Series Calculator is a very powerful scientific calculator that can be used to edit data or create new data channels by algebraically manipulating the existing channels. Before using the calculator you should first of all connect it to the appropriate input Glyph (the Butterworth Filter in this case) and then run the flow. This step is necessary so the Calculator knows what data channels are available. You can now right click on the Glyph and select the Properties option. This Glyph, unlike any of the others we’ve seen so far, is laid out like a proper scientific calculator. The top area lets you define the properties of your new data channel. Essentially we are taking one or more input channels and manipulating these algebraically to create a new output channel. In this example we choose to overwrite the original input channel number 1 with the derived stress data in units of MPa, as shown in the picture opposite. We could have chosen a new channel number if we’d wanted to; in this case we’d then have both sets of data available for subsequent analysis. You can pick up the strain data from the Available Channels box by double clicking on it. The Available Channels box lists all the data channels that are available for use in your equations; when you double click, you’ll notice that that particular channel has been added to the equation editor window ready for use in your equation. You can now complete the equation just like any other calculator. You can use the buttons provided or type the equation from the keyboard. When you have finished, simply click on the button titled ‘Add to List’ and the equation is entered in the Currently Defined Equations list. You can add any number of equations to the list and they are all evaluated in sequence from top to bottom. You can even use results in one calculation that were derived from an earlier one. To edit an equation at some later date, simply select the equation from the Currently Defined Equations panel and press the ‘Edit’ button to move it into the editor panel.
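Outside the Calculator Glyph the conversion is a one-liner; the sketch below simply applies the formula above with the quoted modulus:

    E_MPA = 203400.0  # elastic modulus of RQC100, MPa

    def microstrain_to_stress_mpa(strain_ue):
        """Convert microstrain to stress in MPa assuming linear elasticity."""
        return strain_ue * E_MPA / 1.0e6

    # Example: 500 microstrain -> about 101.7 MPa
    print(microstrain_to_stress_mpa(500.0))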


[Screenshot callouts for the Calculator properties form:]
• This box shows all the data channels currently available for use in your calculation. If you create a new channel then this will also pop up here for you to use in subsequent equations.
• Channel number and description of your new output data channel. You can overwrite an existing channel number if you want.
• This is the edit box where you’ll edit your equation.
• You can use these calculator buttons or type the equation on the keyboard.
• Your finished equation is added to the equation list displayed here. You can edit the equation at any time, as well as changing the order of calculation or removing the equation altogether.

Conclusion to tutorial 3
In this tutorial we have used many of the manipulation tools available in GlyphWorks. The highly imperfect input file that we started with has been converted into 1200 seconds of viable data that can be used to run a fatigue analysis. The fact that it also exists in both stress and strain versions means that it can be used for both SN and EN analyses. We have also used many of the advanced features of GlyphWorks. In tutorials 4 and 5 we will carry out fatigue life predictions with what we have created.


Tutorial 4— Stress life (SN) fatigue analysis in GlyphWorks

Stress life (SN) fatigue analysis
In this tutorial we’ll introduce the SN Fatigue Analysis Glyph in GlyphWorks 2.2. We start with a brief introduction to fatigue theory and then learn how to apply it to the rotor blades that were analyzed in tutorials 1–3. We’ll learn how GlyphWorks 2.2 can be used to predict the life of the blade, and to carry out:
• Sensitivity studies to ascertain how variations in some inputs can affect the predicted in-service life
• ‘What if’ analysis to see how varying the material influences the predicted life
• ‘Back calculations’ in which the life is fixed and input parameters are calculated
We will also investigate the effect of notches and welds on fatigue life.

GlyphWorks 2.2—Tutorial 4

Learning objectives
Topic 1 – An introduction to SN fatigue life prediction. After completing this tutorial you will understand the engineering principles behind SN fatigue.
Topic 2 – SN fatigue life prediction with GlyphWorks 2.2. After completing this tutorial you will be able to enter all of the parameters for a fatigue life prediction.
Topic 3 – Post processing fatigue. In this tutorial we will examine the rainflow and damage histograms, view time correlated damage plots and compare these with the original input file.
Topic 4 – Sensitivity studies, what ifs, and multiple calculations. After completing this tutorial you will know how to gauge the expected variability in fatigue life.
Topic 5 – The effect of notches on fatigue life. After completing this tutorial you will understand how notches affect the fatigue life and how to use the stress concentration factor Kt to produce more accurate life predictions.
Topic 6 – Fatigue of welds. After completing this tutorial you will be able to perform fatigue analysis on welded joints.

Pre-requisites
You must have completed tutorials 1 to 3. You’ll also need the file ‘Sg1_stress.dac’, created in tutorial 3, in your working directory.


Topic 1 – an introduction to SN fatigue life prediction
After completing this tutorial you will understand the engineering principles behind SN fatigue.

In this exercise we take the measured strain gauge time signal from the cooling fin and use it to estimate the fatigue life of the component. The strain gauge was positioned directly over the critical stress region on the cooling fin as shown below.

[Figure: location of the strain gauge on the cooling fin.]

Quick overview of the SN method

The SN (or ‘Nominal Stress’) approach is the oldest method of fatigue estimation. The German railway engineer, August Wöhler, developed the method between 1852 and 1870. Wöhler studied the progressive failure of railway axles using the test rig shown below. He subjected two railway axles simultaneously to a rotating bending test. He then plotted the nominal stress versus the number of cycles to failure, which has become known as the SN diagram. Each curve is still referred to as a Wöhler line. The SN method is still the most widely used method today and a typical example of the curve is shown below. Several features are notable about the Wöhler line. First, we note that below the transition point (approximately 1000 cycles) the SN curve is not valid because the nominal stresses are now elastic-plastic. This is sometimes known as ‘low cycle’ fatigue on account of there being a low number of cycles to failure. It has been shown that fatigue is driven by the release of plastic shear strain energy; therefore above yield, stress loses its linear relationship with strain and cannot be used. This region is addressed by the EN (or Strain Life) method discussed later.

[Figure: Wöhler’s rotating bending fatigue test.]

Between the transition and the endurance limit (approximately 10^6 – 10^8 cycles), SN based analysis is valid. Above the endurance limit, the slope of the curve reduces dramatically and, as such, this is often referred to as the “infinite life” region. In practice, however, infinite life is seldom achievable. For example, aluminium alloys do not exhibit infinite life, and even steel does not exhibit infinite life when subjected to variable amplitude loading. The GlyphWorks analyser uses a “tri-linear” curve to represent the Wöhler line. This is made up of three logarithmic segments relating to the low cycle (plastic) regime, the high cycle (elastic) regime and the “infinite life” regime, respectively. Two typical SN curves are shown on the next page. These represent the low alloy steel MANTEN, by US Steel Corporation, and the high strength steel RQC100, by Bethlehem Steel Corporation. The dotted line below 1000 cycles represents the low cycle regime. The change in gradient at 10^8 cycles shows the effect of the endurance limit.


Topic 2—SN Fatigue Life Estimation

In this topic we will carry out a simple SN fatigue analysis to estimate the life of our component. The basic glyph process is easy to create; what is trickier is to establish the properties to use. To assemble the glyph flow:
1. Start a new worksheet by selecting ’File | New’ from the menu.
2. Drag the stress based time series file created in the previous tutorial, ‘sg1_stress.dac’, onto the workspace.
3. Drag the Stress Life (SN) glyph on to the output pad of the time series input glyph.
4. Drag a MetaData Display glyph on to the output pad of the SN glyph.
5. Right click on the SN Glyph, select ‘Properties’ from the menu and enter the calculation properties listed below.
6. Run the process and expand the tree in the Meta Data display: Channel1 Meta Data / StressLife1_Results. Alternatively, to see the results in a table format, right click on the Meta Data Display Glyph and select ‘Display Type = Results’.

In the stress life glyph enter these properties:
Material Data
• Material Data Source = Properties
• Database Name = leave empty
• Materials Name = RQC100
• Surface Finish = Polished
• Surface Treatment = None
• Certainty of Survival = 50
• SRI1 = 2199
• B1 = -0.075
• NC1 = 1E+8
• B2 = -0.039
• Standard Error = 0
Advanced
• Mean Stress Correction = Goodman
Material
• Material Stress Units = MPA
• UTS = 863

[Figure: S-N Data Plot, showing stress range (MPa) against life (cycles) on logarithmic axes for MANTEN (SRI1: 1717, b1: -0.095, b2: 0, E: 2.034E5, UTS: 552) and RQC100 (SRI1: 2199, b1: -0.075, b2: 0, E: 2.034E5, UTS: 863).]

SN curves for MANTEN low alloy steel and RQC100 high strength steel.

To calculate the fatigue life of a component, the analyser needs the SN curve for the material and a time history representation of the varying stress field at the point of failure. This is commonly recorded using a strain gauge signal similar to the one used in the previous tutorials. First of all the analyser will carry out a rainflow analysis of the time signal to extract the fatigue damaging cycles. It will then use the SN curve to determine the damage caused by each cycle and then sum the damage to calculate the total accumulated damage in the signal. This process is all automated as shown in the following exercises. For more information on fatigue theory refer to nCode’s Fatigue Theory training course, and check the online manual.
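As background to what the Stress Life Glyph does internally, the sketch below evaluates a single-slope SN curve and sums damage with Miner's rule. It is illustrative only: it assumes the common form ΔS = SRI1 · N^b1 for the first segment of the tri-linear curve, uses the RQC100 parameters listed above, and the cycle ranges are hypothetical rather than taken from sg1_stress.dac:

    def cycles_to_failure(stress_range_mpa, sri1=2199.0, b1=-0.075):
        """Invert the first SN segment, delta_S = SRI1 * N**b1, for N."""
        return (stress_range_mpa / sri1) ** (1.0 / b1)

    def miner_damage(cycle_ranges_mpa):
        """Sum damage over a list of rainflow cycle ranges using Miner's rule."""
        return sum(1.0 / cycles_to_failure(s) for s in cycle_ranges_mpa)

    # Hypothetical rainflow ranges (MPa) from one repeat of an input file:
    ranges = [400.0, 350.0, 300.0, 250.0]
    damage = miner_damage(ranges)
    print("damage per repeat:", damage, "-> life in repeats:", 1.0 / damage)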


After you have run the process the meta data glyph should be populated with fatigue analysis data. The life should be 3.334E6 repeats of the input file. Given that the input file is 1200 seconds long that is a life of around 127 years! Keep this glyph process on screen, or save it as we will be using it again in the next topic.

Stress life in repeats of input file


Topic 3—Post Processing Fatigue Data
In this topic we examine the rainflow and damage matrices output by the Stress Life Glyph to ensure that the data is sufficient for our purposes.
1. Drag a Histogram Display glyph onto the worksheet.
2. Connect the two histogram input pads to the 2nd and 3rd output pads (damage and rainflow histograms) of the Stress Life Glyph as shown opposite.
3. Run the analysis process to see the histograms. What do you conclude about this data? Is the sample length sufficient to determine a statistically sound estimate of fatigue damage?

We have talked about rainflow cycle counting before in earlier tutorials. It is a process to extract fatigue-damaging cycles from a random time signal and, when plotted in 3 dimensions, can prove a valuable analysis tool in its own right. You will remember how we used this plot to identify spikes and insufficient sample size. Before any fatigue analysis is done, the analyser will perform a rainflow extraction on the time signal. The plots for this are made available by connecting the output pads to the histogram display glyph, so that you can satisfy yourself of the adequacy of the input data. You might wish to perform your own rainflow extraction using the ‘Rainflow’ Glyph introduced in tutorial 2, and thereby bypass the automatic routine in the Stress Life Glyph. The Stress Life Glyph has 3 input pads and data can be provided to any of these, although you cannot connect more than one at a time. The top (blue) pad accepts time signal data, whereas the middle (red) and bottom (brown) accept data from the Rainflow Glyph, thereby bypassing the internal rainflow analysis in the Stress Life Glyph. The difference between the middle (red) pad and bottom (brown) pad is the type of data expected. The middle pad expects the rainflow histogram whereas the bottom pad expects a list of cycles. The histogram format tends to ‘bin’ the data into a small number of columns, thereby improving its plotting clarity at the expense of numerical precision, whereas the listed data is more exact. Bypassing the internal routine is only necessary if you want to edit the histogram before estimating the fatigue damage.

The Stress Life Glyph will output the Rainflow histogram and cycle list just like the Rainflow Glyph does. It will also output a corresponding damage histogram that represents the proportion of damage attributable to each histogram bin. For the example data used here, notice the well-defined ‘arrowhead’ shape to the rainflow cycle count, indicating that this is likely to be good clean data. As fatigue damage varies exponentially with cycle range, most of the damage will be caused by the few cycles at the tip of the arrowhead. Compare the damage histogram with the rainflow histogram to see which cycles are the most damaging. Do you think we have sufficient sample length here to derive a statistically representative life? A cursory glance tells us that nearly all of the damage can be attributed to approximately 15 cycles, so we are unlikely to expect good statistical confidence with these results!


About the Stress Life input and output pads

The SN glyph has a number of input and output pads; these are listed below. Only one input pad can be connected at a time.

Input pads:
• Time signal
• Histogram from Rainflow Glyph
• Cycle list from Rainflow Glyph

Output pads:
• Time correlated damage
• Damage Histogram
• Rainflow Histogram
• Meta Data
• Rainflow Cycle list


Topic 4—Sensitivity Studies
In this topic we investigate the significance of minor variations in input parameters on the fatigue life of our component. This type of sensitivity study is useful in determining which parameters are most influential and what probable range of life will be delivered in service. To help record these results we will first of all switch the Meta Data display glyph into table view and tell it to collate the tests. 1. Right click on the Meta Data glyph and select properties from the menu. Set ’Display Type = Results’ and ’Collate Tests = True’.

Up to this point we have treated fatigue analysis as a simple deterministic process resulting in a single value for life. In reality, fatigue is very statistical in nature. A company producing aircraft, for instance, might produce two ‘identical’ planes and sell them both to the same customer for use on the same route. In theory they should last exactly the same length of time, but in practice we realise that some components will fatigue more quickly on one aircraft than the other. The difference in fatigue life is due to small statistical variations in the materials used, the quality of production, the load spectra experienced, and abuse by the crew. If we were to take a large enough sample of aircraft, we could plot the probability density function of life for a particular component. A typical plot is shown below.

Now proceed through the following tasks and read the accompanying notes to ascertain the sensitivity to:
a. Overload
b. Various surface finishes
c. Various surface treatments
d. Mean stress correction algorithm
e. Residual stresses
f. Back calculation for Factor of Safety analysis

[Figure: PDF of fatigue life.]

For this reason, it is usually wise to use the software to determine the sensitivity of damage to all the various input parameters, and to carry out multiple life predictions. The end objective is to determine the expected ‘spread’ of life and the most critical parameter affecting it. GlyphWorks has been specifically written to facilitate this type of study. We usually proceed by varying each parameter separately and noting its significance on life. After each exercise we reset the parameter to its original value before proceeding to the next parameter, thereby avoiding confusion.


Topic 4a—Sensitivity to Overloads
In this topic we investigate the sensitivity to overloads. We will compare the fatigue life after scaling the input loads up by 10% and also after reducing them by 10%. This will enable us to assess the significance of loading and help us to determine what the likely spread of life will be in-service.
1. Right click on the Stress Life Glyph and select ‘Properties’ from the menu.
2. Enter the property Scale = 1.1 and rerun the analysis. Notice how the result is appended to the end of the Metadata table.
3. Repeat the above using a Scale = 0.9 and rerun the analysis.
4. Note the new values and then reset the scale factor back to Scale = 1.0 ready for the next topic.

Topic 4b—Sensitivity to surface finish

In this topic we investigate the sensitivity to surface finish and assess the extent to which this could affect our fatigue life.
1. From the Stress Life properties, change the Surface Finish = Average Machined and run the analysis.
2. Now rerun the analysis using Surface Finish = Ground.
3. Note the results and reset the Surface Finish = Polished.

Sensitivity to Overloads

The ‘Scale Factor’ property allows us to assess the sensitivity to possible calibration errors during data collection or possible overloads in-service. We are often unsure how representative the measured time signal is of the real loads expected by our component, so it is prudent to carry out this sensitivity study to quantify the effect.

Sensitivity to surface condition

The original analysis was based on material data obtained from a specimen with a mirror polished finish. Such a high quality finish is expensive for production components, so GlyphWorks allows you to run the analysis with 10 different surface finishes to estimate how less expensive surfaces affect the expected life. Surface quality can significantly affect fatigue life under high cycle fatigue, but has less significance in low cycle fatigue where the relatively high loads dominate the fatigue process. On the stress life glyph, change Surface Finish to Average Machined. Run the analysis and note that the predicted life falls dramatically. Is the life too low? Re-run the test with a surface finish of Ground. The life should be about half that of Polished, but about three times longer than Average Machined. Now re-set it to Polished prior to the next exercise.

About Surface Finish Factors

The surface finish factors offered in GlyphWorks are only intended for reference. They are only suitable for steel components and their accuracy cannot be guaranteed. You should always attempt to obtain representative material samples if you intend to model the surface finish with accuracy.


Topic 4c—Sensitivity to surface treatments
In this topic we investigate the possible benefits of applying a particular surface treatment.
1. From the Stress Life properties, change the Surface Treatment = Shot Peened and run the analysis.
2. Note the results and reset the Surface Treatment = None.

The Effect of Surface Treatments

The Surface Treatments modelled in GlyphWorks can be used to improve the fatigue resistance of your component. These surface treatments effectively provide a residual compressive stress at the surface of the component that retards the development of fatigue cracks. Such treatments are only effective in the high cycle fatigue regime where the applied loads are relatively small. The large loads associated with low cycle fatigue are likely to reverse the residual compressive stresses, thereby destroying the beneficial effect of the treatment.

About Surface Treatment Factors

The surface treatment factors offered in GlyphWorks are only intended for reference. They are only suitable for steel components and their accuracy cannot be guaranteed. You should always attempt to obtain representative material samples if you intend to model the surface treatment with accuracy.

Topic 4d—Sensitivity to mean stress correction

GlyphWorks can allow for the effect of mean stresses using the Goodman and Gerber corrections.
1. From the Stress Life properties, change MeanStressCorrection = Gerber and run the analysis.
2. Now rerun with MeanStressCorrection = None.
3. Note the results and reset the MeanStressCorrection = Goodman.

Sensitivity to mean stress correction algorithm

The mean stress (residual stress) will affect the rate of fatigue damage. Viewed conceptually, if a mean tensile stress is applied to a crack, then the crack is being forced open and any stress cycles applied will therefore have a more pronounced effect. Conversely, if a mean compressive stress is applied, then the crack will be forced shut and any stress cycle would first of all have to overcome the pre-compression before any growth could ensue. The graphic shows the effect of residual stress on the SN curve: the curve reduces for tensile residual stresses and rises slightly for compressive residual stresses.

[Figure: the effect of mean stress on fatigue life.]

For a comprehensive fatigue analysis it would be desirable to use a number of SN curves, each one for a different mean stress level. (The SN glyph can perform the interpolation required for this type of data if required.) However, the time and cost of performing so many fatigue tests is usually too much, and so we mainly rely on an empirical correction to take account of residuals. The most popular corrections for SN analysis are the Goodman and Gerber corrections. In general, Goodman proves more conservative for tensile residuals whilst Gerber proves most conservative for compressive. From a pragmatic view we usually calculate the results using all methods and use the most conservative.
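For reference, the corrections are usually written in their standard textbook forms (the glyph's exact implementation may differ in detail): Goodman divides the stress amplitude by (1 − Sm/UTS) and Gerber by (1 − (Sm/UTS)^2), where Sm is the cycle mean stress. A minimal sketch:

    def goodman_equivalent_amplitude(s_a, s_m, uts):
        """Equivalent fully reversed amplitude using the Goodman correction."""
        return s_a / (1.0 - s_m / uts)

    def gerber_equivalent_amplitude(s_a, s_m, uts):
        """Equivalent fully reversed amplitude using the Gerber correction."""
        return s_a / (1.0 - (s_m / uts) ** 2)

    # Example: a cycle of 200 MPa amplitude with a 100 MPa tensile mean, UTS = 863 MPa
    print(goodman_equivalent_amplitude(200.0, 100.0, 863.0))  # ~226 MPa
    print(gerber_equivalent_amplitude(200.0, 100.0, 863.0))   # ~203 MPa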


Topic 4e—Sensitivity to residual stresses
In this topic we assess the sensitivity to residual stresses in the component.
1. From the Stress Life properties, change Offset = 100 and run the analysis.
2. Now rerun with Offset = 300.
3. Note the results and reset the Offset = 0.

Sensitivity to Residual Stress

Residual stresses are usually introduced during manufacture through processes such as cold forming or welding. Compressive residual stresses can be beneficial for the reasons discussed earlier in the section on surface treatments; however, tensile residual stresses will actually increase the fatigue damage. We can assess the effect of residual stresses by varying the mean stress offset and seeing how this influences the component’s life. To see the effect, set the Offset property to 100 (100 MPa of tensile residual). Run the analysis and note the effect on life. Now try an offset of 300 MPa and again note the effect. Reset the offset to 0 before the next exercise.

Topic 4f—Back Calculation

In this topic we will carry out an iterative ‘back’ calculation to determine an appropriate scale factor that yields a particular life.
1. From the Stress Life properties, double click on Mode = Scale Factor. You should notice a couple of new properties appear on the form.
2. Set the Target Life = 10000 and rerun the analysis.
3. Scroll through the metadata results and observe the ‘Scale Factor’ result.

Back calculation

Suppose that, despite the analyses above, our component were to fail after 10000 repeats of the input signal. GlyphWorks can be used to investigate possible causes. Suppose that you suspect operator abuse, for example. What stress overload would be required to bring the life down to 10000 repeats? GlyphWorks will back-calculate to find the appropriate scale factor to achieve a specified life. In this case the calculated scale factor is about 1.5, so a 50% continuous overload would explain the shortened life.

Results

So what results did you get? For the record, here are our results. Don’t be concerned if your numbers are slightly different; you may have a slightly different input file after all of that cropping, de-spiking and Butterworth filtering.
Parameter                  Value               Number of repeats   Approx % change
Scale factor               1.1                 7.97E5              -75
Scale factor               0.9                 1.95E7              +600
Mean stress correction     Gerber              2.89E6              -13
Mean stress correction     None                2.87E6              -14
Surface                    Average machined    1.67E5              -95
Surface                    Ground              9.52E5              -71
Residual (MPa)             300                 8492                -99
Residual (MPa)             800                 5.36E5              -84

About Residual Stress Analysis
This sensitivity study can only be performed if you have used a mean stress correction such as Goodman or Gerber. It will not work if the mean stress correction is switched to None.

A Question for you…
After considering the sensitivity to mean stress correction and having understood residual stress analysis, do you think Goodman was a suitable choice of mean stress correction? After all, Gerber gave shorter lives in topic 4d! Goodman tends to be non-conservative for compressive residuals, so it gave a higher life than either Gerber or No Correction in topic 4d. It would therefore be prudent to choose Gerber over Goodman for this particular time signal. However, Goodman will always prove more conservative when assessing tensile residuals such as those being investigated here, so it is better to switch to Goodman for sensitivity studies on tensile residual stresses.
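Conceptually, the back calculation in Topic 4f is a one-dimensional search: scale the loading, recompute the damage, and repeat until the predicted life matches the target. The sketch below is purely illustrative and is not GlyphWorks’ algorithm; it assumes Miner’s rule with a made-up power-law SN curve and a hypothetical set of rainflow counts.

```python
# Illustrative back calculation: find the scale factor that gives a target life.
# Not GlyphWorks' algorithm; the damage model and all constants are placeholders.

def life_repeats(scale, cycles):
    """Predicted life (repeats of the signal) for a given stress scale factor.

    Assumes Miner's rule with a power-law SN curve N = C / S**b,
    where C and b are purely illustrative constants.
    """
    C, b = 1.0e19, 5.0
    damage_per_repeat = sum(n * (scale * s_amp) ** b / C for s_amp, n in cycles)
    return 1.0 / damage_per_repeat

def back_calculate(target_life, cycles, lo=0.1, hi=10.0, tol=1e-4):
    """Bisect on the scale factor until the predicted life hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if life_repeats(mid, cycles) > target_life:
            lo = mid   # life too long -> the loading needs to be scaled up
        else:
            hi = mid   # life too short -> the loading needs to be scaled down
    return 0.5 * (lo + hi)

# Hypothetical rainflow result: (stress amplitude in MPa, cycle count) pairs.
cycles = [(150.0, 2000), (250.0, 300), (400.0, 10)]
print(back_calculate(target_life=10000.0, cycles=cycles))
```

The real glyph works on the full rainflow histogram and the selected SN curve, but the idea is the same: life falls monotonically as the scale factor rises, so a simple bracketing search homes in on the factor that gives the requested life.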


Topic 5—The effect of Notches
In this topic we look at notch correction in SN analysis. Notch correction is briefly discussed in the information panel below. We are interested here in the impact of reducing the blade root radius from 5mm to 3mm. This reduction will give rise to an additional local stress concentration at the blade root that will reduce the fatigue life.
1. Make sure you have reset all the Stress Life properties back to the original values. (Life = 3.334E6 repeats)
2. Edit the Stress Life property Kf = 1.7 and rerun the analysis.
3. Note the result and reset Kf = 1 when you’ve finished.

Running the notch correction (Kf) analysis Enter the new properties as specified and run the analysis. What’s your result? It should be a predicted life of about 13000 repeats—a drop of over 99% caused by reducing the radius from 5mm to 3mm, and the consequent increase in the stress concentration. Fatigue lives can be very sensitive to the presence of stress raisers such as sharp corners and notches.


The effect of notches
In many cases it is not possible to apply a strain gauge directly over the point of failure. We may not know the exact location of failure, or the location may be inaccessible. In many cases the stress gradient is too steep around a failure site and the strain gauge itself would be too large to obtain a good physical result. In these cases it is common practice to apply the strain gauge at some remote location and use a stress concentration factor ‘Kt’ to scale the nominal stress values to those encountered at the critical location. An illustration of the principle is shown below. Several people have documented values of Kt for various types of notch, the most famous being the work of Peterson. Where suitable values cannot be readily obtained, a simple linear Finite Element analysis can be used to determine the solution. This graphic shows the effect of a circular notch in an infinite plate.

GlyphWorks allows you to model the effect of notches using a fatigue reduction factor Kf. Kf is a function of the stress concentration factor ‘Kt’ and a material’s sensitivity to notches. At its most conservative Kf = Kt, but Kf is generally slightly lower. You may wonder why the notch correction is not applied to the ‘Scale’ factor in nSoft. If the time history were scaled by this value it would most probably exceed the yield strength and SN would no longer be valid. The fatigue reduction approach considers the effect of local yielding: a very small region of local yielding next to a stress concentration is not as severe as when the nominal stress exceeds yield. For more information please refer to the Basic Fatigue Theory training course.

Consider how the fatigue life of our component would be compromised if the 5mm radius were reduced to 3mm. The FE department has calculated an effective notch correction Kt = 1.7 for this case.
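One common way of relating Kf to Kt (though not necessarily the method GlyphWorks uses internally) is Peterson’s notch sensitivity factor q = 1 / (1 + a/r), which gives Kf = 1 + q(Kt - 1). The sketch below applies this to the 3mm root radius with the FE-derived Kt of 1.7; the material constant ‘a’ is an assumed placeholder, not an RQC100 value.

```python
# Illustrative Kt -> Kf conversion using Peterson's notch sensitivity relation.
# Not necessarily how GlyphWorks derives Kf; 'a_mm' is an assumed material constant.

def fatigue_notch_factor(kt, radius_mm, a_mm):
    """Kf = 1 + q * (Kt - 1), with Peterson's notch sensitivity q = 1 / (1 + a/r)."""
    q = 1.0 / (1.0 + a_mm / radius_mm)   # 0 < q < 1; q tends to 1 for large radii
    return 1.0 + q * (kt - 1.0)

# FE-derived Kt = 1.7 for the 3 mm blade root radius (from the tutorial);
# the characteristic length a_mm is swept over placeholder values.
for a_mm in (0.1, 0.25, 0.5):
    kf = fatigue_notch_factor(kt=1.7, radius_mm=3.0, a_mm=a_mm)
    print(f"a = {a_mm} mm -> Kf = {kf:.2f}")
```

The computed values fall slightly below Kt, which is why the exercise’s choice of Kf = Kt = 1.7 is the conservative one.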


Topic 6—Weld Fatigue Analysis
In this topic we examine what would happen if a simple fillet weld were used to attach the fin. GlyphWorks comes complete with a database of commonly used materials and weld properties so you don't have to type them in manually.
1. Make sure you have reset all the Stress Life properties back to the original values. (Life = 3.334E6 repeats)
2. Edit the Stress Life property MaterialDataSource = MDM_Database.
3. Notice how the material tab at the top of the form is now un-greyed. Click on the Material tab to see a list of available properties.
4. Ignore the error box; this is merely informing you that your existing material, RQC100, is not present, so click on Close to dismiss it.
5. Select the weld ‘Class F’ and either double click or press the ‘Select’ button.
6. Rerun the analysis and note the new life.

Running the Welded analysis
Now you will examine what would happen if a simple fillet weld were used to attach the fin. From the Materials menu choose Class F (weld analysis). Welds are classified in accordance with BS7608. If you do not know what classification you have, you can use the British Standard and look at the illustrations. For this analysis it is appropriate to use a Class F weld for a component of 5mm thickness. The new result is only about 28 repeats! This is a drop in life of over 99.9%, purely because the component is now welded. In a real engineering exercise you would almost certainly have to change the design or methods of manufacture to achieve a more acceptable lifespan.
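For context, BS7608 weld class curves are commonly written in the form N x S^m = K, with a separate slope and intercept for each class. The sketch below uses that general form with placeholder constants (they are not the standard’s Class F values); the point is simply to show how a weld-class curve turns modest stress ranges into short lives.

```python
# Illustrative weld-class SN curve in the BS7608 style, N * S**m = K.
# The constants below are placeholders, NOT the standard's Class F values.

def weld_life_cycles(stress_range_mpa, m=3.0, K=1.0e12):
    """Cycles to failure for a given stress range on a nominal weld-class curve."""
    return K / stress_range_mpa ** m

# Doubling the stress range divides the life by 2**m on a curve of slope m.
for stress_range in (100.0, 200.0):
    print(f"stress range {stress_range} MPa -> {weld_life_cycles(stress_range):.3g} cycles")
```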

Note: The material database tab is only available if the property ‘MaterialDataSource = MDM_Database’.

Conclusion to tutorial 4
You have now learned how to use the SN stress life glyph to carry out basic calculations, detailed sensitivity studies and failure analysis. You’ll find more information on fatigue analysis in the online documentation and the nCode training courses. In the next tutorial you will do similar things with the strain life (EN) fatigue glyph. It is comforting to recognise that both glyphs share the same sort of functionality, so now that you’re happy with SN analysis you should have no problem with EN.


Tutorial 5— Strain life (EN) fatigue analysis in GlyphWorks

Strain life (EN) fatigue analysis

GlyphWorks 2.2—Tutorial 5

In this tutorial we’ll introduce the EN Fatigue Analysis Glyph in GlyphWorks 2.2. We start with a brief introduction to EN fatigue theory and learn how to apply it to predict the life of the rotor blades used in tutorials 1–4.

Learning objectives
Topic 1 – an introduction to EN fatigue life prediction
After completing this tutorial you will understand the engineering principles behind EN fatigue analysis and know when to use the EN approach over the SN approach.

Topic 2 – EN fatigue life prediction with GlyphWorks
After completing this tutorial you will:
• use the strain life glyph to predict the life of the blade
• understand the input and output from the Strain Life Glyph
• understand all of the parameters for a fatigue life prediction

Pre-requisites
You must have completed tutorials 1 to 4 and will also need the file ‘Sg1_cleaned.dac’ created in tutorial 3.


Topic 1 – an introduction to EN fatigue life prediction
After completing this tutorial you will understand the engineering principles behind EN fatigue.

Strain Life (EN) Analysis Introduction

Quick overview of the EN method
With the advent of modern magnification techniques, fatigue cracks have been investigated in more detail. We now know that a fatigue crack initiates and grows in a two-stage process. In the early stages a crack is seen to grow at approximately 45° to the direction of applied load (following the line of maximum shear stress). After traversing two or three grain boundaries, its direction changes and it then propagates at approximately 90° to the direction of the applied load. These are known as Stage I and Stage II cracks and are illustrated below. Furthermore, we now know that fatigue cracks develop and grow as a result of very localised plastic shear strains on a microscopic level.

[Figure: Stage I and Stage II crack growth]

When August Wöhler pioneered the first fatigue analysis method (SN), he was unaware of this two-stage crack growth process, so the SN method traditionally includes both stages. In actual fact, each stage involves a different physical mechanism, and today we usually use different analysis techniques for each. The EN (or local strain) method is used to calculate the time taken for Stage I crack growth, while we usually employ a fracture mechanics approach to calculate Stage II growth. For many components, Stage II growth may be so fast that engineers can safely ignore it. For more details on this and fracture mechanics, please refer to nCode’s Basic Fatigue Theory training course.

The EN approach uses strain as an input as opposed to stress (which you used in the SN method in tutorial 4). Localised plastic shear strain is the property that drives fatigue, so strain is the more suitable choice of input. The EN curve can be considered as a simple extension of the SN curve: where the SN curve plots stress vs. life, the EN curve plots strain vs. life. When stresses are linear elastic (i.e. in the high cycle regime), the two curves yield virtually the same life result. However, where the SN curve is invalid, below about 1000 cycles to failure, the EN curve can still be used. The plot below shows a normalised comparison between the two curves.

[Figure: normalised comparison of the SN and EN curves, showing the low cycle region (EN method), the high cycle region (SN or EN method), the transition, the endurance limit and the possible infinite life region]
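The EN curve compared above is normally written as the Coffin-Manson-Basquin strain-life relation, strain amplitude = (sigma_f'/E)(2Nf)^b + eps_f'(2Nf)^c, where the first (elastic) term dominates in the high cycle region and the second (plastic) term dominates in the low cycle region. The sketch below solves this relation for life by bisection; the material constants are illustrative assumptions only, not values taken from the GlyphWorks database.

```python
import math

# Strain-life (EN) curve: total strain amplitude as a function of reversals 2Nf.
#   eps_a = (sigma_f' / E) * (2Nf)**b + eps_f' * (2Nf)**c
# The constants below are illustrative assumptions, not GlyphWorks database values.
E  = 2.05e5   # Young's modulus, MPa
sf = 1150.0   # fatigue strength coefficient sigma_f', MPa (assumed)
b  = -0.10    # fatigue strength exponent (assumed)
ef = 1.0      # fatigue ductility coefficient eps_f' (assumed)
c  = -0.75    # fatigue ductility exponent (assumed)

def strain_amplitude(reversals):
    return (sf / E) * reversals ** b + ef * reversals ** c

def life_reversals(eps_a, lo=1.0, hi=1.0e9):
    """Solve strain_amplitude(2Nf) = eps_a for 2Nf by bisection in log space."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > eps_a:
            lo = mid   # curve still above the target strain -> need more reversals
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Example: a 0.2% strain amplitude cycle.
two_nf = life_reversals(0.002)
print(f"2Nf = {two_nf:.3g} reversals, Nf = {two_nf / 2:.3g} cycles")
```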


Topic 2—EN Fatigue Life Estimation
In this topic we will carry out a simple EN fatigue analysis to estimate the life of our component.
1. Start a new worksheet by selecting ‘File | New’ from the menu.
2. Drag the strain based time series file created in tutorial 3, ‘sg1_strain.dac’, onto the workspace.
3. Drag the Strain Life (EN) Glyph on to the output pad of the time series input glyph.
4. Drag a Meta Data Display glyph on to the output pad of the EN glyph.
5. Change its properties to Display Type = Results, Collate Tests = True.
6. Right click on the Strain Life EN Glyph and select Properties from the menu. Set the property MaterialDataSource = MDM_Database, click on the Materials tab and select the material ‘RQC100’. Set the MeanStressCorrection = Morrow. Run the analysis and note the results.

Now consider a sensitivity study on all the input parameters. You can follow the notes in Tutorial 4 if you need assistance. How do these results compare with the SN based analysis?

The Morrow mean stress correction is very similar in its implementation to the Goodman approach used in the SN method (a sketch of the Morrow form follows the list below). Notice the similarity between the EN based life result of 3.96E6 repeats to failure and that obtained with the SN method, 3.34E6. In terms of fatigue lives this 15% discrepancy is negligible, as the real scatter of lives is usually greater than this. The similarity between the two results arises here because the rainflow cycles all lie in the high cycle damage region, where both the SN and EN methods are valid. Have you noticed how similar the SN and EN Glyphs are in operation? Try rerunning some of the sensitivity studies we did earlier:
• Re-run the analysis with scale factors of 1.1 and 0.9
• Re-run the analysis with different surface finishes and treatments
• Re-run the analysis with the mean stress correction set to SWT and NoCorrection respectively
• Re-run the analysis with a back calculation target life of 10,000 repeats
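As a rough illustration of the Morrow correction mentioned above (a sketch only, reusing assumed constants rather than the RQC100 database values), the mean stress of each cycle is simply subtracted from the fatigue strength coefficient in the elastic term of the strain-life equation:

```python
# Morrow mean stress correction applied to the strain-life equation:
#   eps_a = ((sigma_f' - sigma_mean) / E) * (2Nf)**b + eps_f' * (2Nf)**c
# Constants are illustrative assumptions, not GlyphWorks database values.
E, sf, b, ef, c = 2.05e5, 1150.0, -0.10, 1.0, -0.75

def strain_amplitude_morrow(reversals, mean_stress=0.0):
    return ((sf - mean_stress) / E) * reversals ** b + ef * reversals ** c

# A tensile mean reduces the strain amplitude the material can sustain at a
# given life, which is equivalent to shortening the life at a given amplitude.
for mean in (0.0, 100.0):
    eps = strain_amplitude_morrow(1.0e6, mean)
    print(f"mean = {mean:5.1f} MPa: sustainable eps_a at 2Nf = 1e6 is {eps:.5f}")
```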

Close the worksheet when you have finished. You can save it if you want to re-use it later.


End Note:
Thank you for completing these tutorials. I hope you have enjoyed working through the Quick Start guide and that it has given you the confidence to use GlyphWorks with your own data. The online documentation contains more technical information on the theory and use of each Glyph; you can find information about this on page 6 of this guide. If you require any further help or wish to talk with an nCode engineer, please contact us directly. You can find your nearest contact on our web site at www.ncode.com. If you are interested in attending one of our training courses, please do not hesitate to contact us. Training courses are offered in both engineering theory and software usage.


nCode International Ltd
230 Woodbourn Road
Sheffield S9 3LQ
Phone: +44 (0)114 275 5292
Fax: +44 (0)114 275 8272
E-mail: support@ncode.co.uk

