

Saving this as a CSV file, we then read the saved file into our R workspace. We then create a raw time series object using the ts function, where the row names are dates, select some data, and calculate growth rates. This allows us, and the plotting functions, to use the dates to index the data. Again we make use of the diff-log calculation on the data vector. This means that the standard deviation and the mean change as well, as do higher moments such as skewness and kurtosis.
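A minimal sketch of that workflow (the file name GNP.csv, the column names, and the start date and frequency are illustrative assumptions, not from the text):

```r
# Read the saved CSV file back into the R workspace
GNP.raw <- read.csv("GNP.csv", header = TRUE, stringsAsFactors = FALSE)
# Build a quarterly time series so dates index the data for plotting
GNP.level <- ts(GNP.raw$GNP, start = c(1947, 1), frequency = 4)
# Growth rates via the diff-log calculation
GNP.rate <- diff(log(GNP.level))
# The transformation shifts the moments: mean, standard deviation, and higher
mean(GNP.rate); sd(GNP.rate)
```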

There is a trend in the level and a damped sinusoidal pattern in the growth rate. In a nutshell, we observe several distributions mixed together in this series. This will occur again in the term structure of interest rates, where we will use splines and their knots to parameterize the various distributions lurking just beneath the ebb and flow of the data.

What do we think is going on? There are several significant autocorrelations within the last 4 quarters. The partial autocorrelation also indicates a possible relationship 8 quarters back. In this world we think there is a regression that looks like this: $\Delta y_t = \mu + \phi_1 \Delta y_{t-1} + \phi_2 \Delta y_{t-2} + \theta_1 \varepsilon_{t-1} + \varepsilon_t$, where $y_t = \log(\mathrm{GNP}_t)$. The order is 2 lags of the rates, 1 further difference (already taken when we calculated the diff-log of GNP), and 1 lag of the residuals.
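A hedged sketch of that specification with the base R arima function, continuing with the illustrative objects above (an ARIMA(2,1,1) on log GNP is the same as an ARMA(2,1) on the diff-log growth rate):

```r
# 2 AR lags of the rates, 1 difference (the diff-log), 1 MA lag of residuals
fit.gnp <- arima(log(GNP.level), order = c(2, 1, 1))
fit.gnp
```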

What are the results? The qqnorm function plots the actual quantiles against the theoretical quantiles of the normal distribution.

A line through the scatterplot reveals deviations of the actual quantiles from the normal ones. Those deviations are the key to understanding tail behavior, and thus the potential influence of outliers on our view of the data. How can we begin to diagnose the GNP residuals? We find that the series is very thick tailed and serially correlated, as evidenced by the usual statistical suspects.
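For example, with the residuals from the illustrative ARIMA fit above:

```r
# Actual quantiles of the residuals against theoretical normal quantiles
resid.gnp <- residuals(fit.gnp)
qqnorm(resid.gnp, main = "GNP model residuals")
qqline(resid.gnp, col = "red")  # deviations from this line flag tail behavior
```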

But there is no volatility clustering. Checking for it helps us understand the time series aspects of the volatility of the GNP residuals. The residuals are positively skewed and not especially thick tailed, recalling that the normal distribution by definition has a kurtosis equal to 3.
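One way to check both claims (a sketch; skewness and kurtosis here come from the moments package, and the residual object carries over from the sketch above):

```r
# Volatility clustering would show up as significant ACF of squared residuals
acf(resid.gnp^2, main = "ACF of squared residuals")
# Higher moments of the residuals; the normal benchmark for kurtosis is 3
library(moments)
skewness(resid.gnp)
kurtosis(resid.gnp)
```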

Now for something really interesting, yet another rendering of the notorious Efficient Markets Hypothesis. Our goal is to infer the significance of a statistical relationship among variates.

Our strategy is to resample the data repeatedly and re-estimate the relationship. The data are sampled using the replicate function, and the sample ACF is computed each time. Here is a plot of the distribution of the sample estimates of the one-lag correlation between successive returns. That was a mouthful! When we think of inference, we first identify a parameter of interest and its estimator. That parameter is the coefficient of correlation between the current return and its 1-period lag.
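A minimal sketch of that strategy (the vector name returns, the seed, and the number of replications are assumptions):

```r
set.seed(1016)
n.sim <- 1000
# Resample the returns with replacement and recompute the lag-1 autocorrelation
acf.boot <- replicate(n.sim, {
  r.star <- sample(returns, size = length(returns), replace = TRUE)
  acf(r.star, lag.max = 1, plot = FALSE)$acf[2]
})
```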

We estimate this parameter using the history of returns. Here we plot the simulated density and the lower and upper quantiles, along with the estimate of the lag-1 coefficient. We showed how to pull data from Yahoo! Finance. We characterized several stylized facts of financial returns and inferred behavior using a rolling correlation regression on volatility.
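Continuing the sketch, the simulated density, its 2.5% and 97.5% quantiles, and the sample estimate might be plotted as:

```r
# Sample (historical) estimate of the lag-1 coefficient
acf.hat <- acf(returns, lag.max = 1, plot = FALSE)$acf[2]
# Simulated density with lower and upper quantiles and the estimate overlaid
plot(density(acf.boot), main = "Bootstrapped lag-1 autocorrelation")
abline(v = quantile(acf.boot, probs = c(0.025, 0.975)), lty = 2)
abline(v = acf.hat, col = "red", lwd = 2)
```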

We then supplemented the ordinary least squares regression confidence intervals with quantile regression, which uses the entire distribution of the data. Using bootstrapping techniques, we also simulated coefficient inference to check the efficient markets hypothesis.

This, along with the quantile regression technique, allows us to examine risk tolerance from an inference point of view. In this chapter we touch on the voluminous topic of time series analysis. We will use the following rubric to assess our performance in producing analytic work product for the decision maker. The text is laid out cleanly, with clear divisions and transitions between sections and sub-sections. The writing itself is well-organized, free of grammatical and other mechanical errors, divided into complete sentences, logically grouped into paragraphs and sections, and easy to follow from the presumed level of knowledge.

All numerical results or summaries are reported to suitable precision, and with appropriate measures of uncertainty attached when applicable. All figures and tables shown are relevant to the argument for ultimate conclusions. Figures and tables are easy to read, with informative captions, titles, axis labels and legends, and are placed near the relevant pieces of text. The code is formatted and organized so that it is easy for others to read and understand.

It is indented, commented, and uses meaningful names. It only includes computations which are actually needed to answer the analytical questions, and avoids redundancy.

Code borrowed from the notes, from books, or from resources found online is explicitly acknowledged and sourced in the comments. Functions or procedures not directly taken from the notes have accompanying tests which check whether the code does what it is supposed to. Model specifications are described clearly and in appropriate detail. There are clear explanations of how estimating the model helps to answer the analytical questions, and rationales for all modeling choices. If multiple models are compared, they are all clearly described, along with the rationale for considering multiple models, and the reasons for selecting one model over another, or for using multiple models simultaneously.

The actual estimation and simulation of model parameters or estimated functions is technically correct. All calculations based on estimates are clearly explained, and also technically correct.

All estimates or derived quantities are accompanied with appropriate measures of uncertainty. The substantive, analytical questions are all answered as precisely as the data and the model allow.

The chain of reasoning from estimation results about the model, or derived quantities, to substantive conclusions is both clear and convincing. If uncertainties in the data and model mean the answers to some questions must be imprecise, this too is reflected in the conclusions.

All sources used, whether in conversation, print, online, or otherwise, are listed and acknowledged where they are used in code, words, pictures, and any other components of the analysis.

Ruppert, David and David S. Matteson. Statistics and Data Analysis for Financial Engineering: with R Examples. Springer.

Chapter 4 Macrofinancial Data Analysis

Now that everyone has recovered from this coup, your management wants you to:
Retrieve and begin to analyze data about the Spanish economy.
Compare and contrast Spanish stock market and government-issued debt value versus the United States and several other countries.
Begin to generate economic scenarios based on political events that may, or may not, happen in Spain.

Up to this point we have reviewed several ways to manipulate data in R.

What decisions are we making? What are the key business questions we need to support this decision? What data do we need? What tools do we need to analyze the data? How do we communicate answers to inform the decision? Now we consider the data we might need to answer one of those questions and choose from this set: macroeconomic data (GDP, inflation, wages, population) and financial data. Our decision is to supply a new market segment. Product: how would the performance of these companies affect the size and timing of orders?

How would the value of their products affect the value of our business with these companies? We are a US functional currency firm (see FAS 52), so how would we manage the repatriation of accounts receivable from Spain? These are discrete percentage changes that are similar to, but not quite the same as, the continuously compounded (log) version. Note the use of the indexing of price to eliminate the last price, since what we want to compute is $R_t = (P_t - P_{t-1}) / P_{t-1}$.
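In R, that calculation might look like this (the vector name price is an assumption):

```r
# Simple (discrete) returns: drop the last price from the denominator vector
R.simple <- diff(price) / price[-length(price)]
# Continuously compounded (log) version for comparison
R.log <- diff(log(price))
head(cbind(R.simple, R.log))
```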

We restate the definition of the price of a zero-coupon bond: for a face value of 1 paid at maturity $T$ with continuously compounded yield $y_T$, the price is $P(0,T) = e^{-y_T T}$. If the bond has coupons, we can consider each of the coupon payments as a mini zero-coupon bond. Rearranging with some creative algebra, we can solve for the yield, $y_T = -\ln P(0,T) / T$. We note that the continuous-time yield is always less than the discrete rate because there are so many more continuous compounding periods. We recall, perhaps not too painfully, that integrating the forward rate gives the yield, $y_T = \frac{1}{T}\int_0^T f(s)\,ds$, so that $P(0,T) = \exp\left(-\int_0^T f(s)\,ds\right)$.
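A quick numerical illustration of the claim above that the continuously compounded yield sits below the discrete rate (the bond numbers are made up for the sketch):

```r
# A two-year zero-coupon bond priced at 95 per 100 of face value
zero.price <- 95; face.value <- 100; years <- 2
y.discrete   <- (face.value / zero.price)^(1 / years) - 1  # annually compounded
y.continuous <- log(face.value / zero.price) / years       # continuously compounded
c(discrete = y.discrete, continuous = y.continuous)        # continuous < discrete
```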

Before we go any further we will procure some term structure data to see more clearly what we have just calculated. What does the data look like? We run this code chunk to get a preliminary view of the data. A simple plot is in order now. The equation is translated into R with the diff function. We will use the vector t later when we plot our models of the forward curve. Definitely view the data from the console and check out the help pages (??) for the functions involved. Then length returns the number of maturities in the dat data frame. We can go to http: for more background. This is a natural knot, and thus the possibility of a need for a spline.
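A sketch of that preliminary code chunk (the data frame name dat follows the text, while the column names maturity and price are assumptions):

```r
head(dat)  # view the data from the console
plot(dat$maturity, dat$price,
     xlab = "Maturity (years)", ylab = "Zero-coupon price",
     main = "Term structure data")
# Empirical forward rates from differences of log prices across maturities
forward.emp <- -diff(log(dat$price)) / diff(dat$maturity)
# Grid of maturities used later to plot the fitted forward curves
t <- seq(0, max(dat$maturity), length.out = 100)
length(dat$maturity)  # number of maturities
```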

Back to the data: We put the R version of the bond price into the nls function, along with a specification of the data frame dat and starting values. The dependent variable is price. Our first task is to parse the coefficients from the nls spline fit and build the spline prediction. Here we construct the forward rate spline across T maturities.
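A hedged sketch of that nls call (the par value of 100, the knot at 15 years, and the starting values are assumptions; the forward curve is a quadratic spline whose integral appears in the exponent of the price):

```r
knot <- 15  # the natural knot noted above
# Fit zero-coupon prices: price = 100 * exp(-integral of the spline forward rate)
fit.spline <- nls(price ~ 100 * exp(-theta0 * maturity
                                    - (theta1 * maturity^2) / 2
                                    - (theta2 * maturity^3) / 3
                                    - (theta3 * pmax(maturity - knot, 0)^3) / 3),
                  data  = dat,
                  start = list(theta0 = 0.03, theta1 = 0, theta2 = 0, theta3 = 0))
summary(fit.spline)
```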

Second, pull the coefficients from a summary of the fit. Compare the quadratic spline we just constructed with a pure quadratic polynomial. Simply take the knot out of the nls formula and rerun. This estimate gives us one quadratic function through the cloud of zero-coupon price data. The pure quadratic model produces a higher standard deviation of error than the quadratic spline. We will run this code to set up the data for a plot. First some calculations based on the estimations we just performed.
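Continuing the sketch, pulling the coefficients and refitting without the knot might look like:

```r
# Spline coefficients and the fitted forward curve over the grid t
coef.spline    <- coef(fit.spline)
forward.spline <- coef.spline[1] + coef.spline[2] * t +
  coef.spline[3] * t^2 + coef.spline[4] * pmax(t - knot, 0)^2
# Pure quadratic: the same formula with the knot term removed
fit.quad <- nls(price ~ 100 * exp(-theta0 * maturity
                                  - (theta1 * maturity^2) / 2
                                  - (theta2 * maturity^3) / 3),
                data  = dat,
                start = list(theta0 = 0.03, theta1 = 0, theta2 = 0))
forward.quad <- coef(fit.quad)[1] + coef(fit.quad)[2] * t + coef(fit.quad)[3] * t^2
# Residual standard errors: the spline should show the smaller sigma
summary(fit.spline)$sigma
summary(fit.quad)$sigma
```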

The pure quadratic forward curve seems to dramatically underfit the maturities beyond 15 years. Using a knot at the right maturity adds a boost to the reduction of error in this regression. That means that predictions of future potential term structures are apt to be more accurate than under the null hypothesis of no knot.

This chapter covers the fundamentals of bond mathematics. Using this background we built two models of the forward curve and then implemented these models in R with live data. In the process we also learned something about the nonlinear least squares method and some more R programming to visualize results. See Ruppert and Matteson, who use the termstrc package from CRAN to illustrate term structure models. We will use the following rubric to assess our performance in producing analytic work product for the decision maker.

The text is laid out cleanly, with clear divisions and transitions between sections and sub-sections.

The writing itself is well-organized, free of grammatical and other mechanical errors, divided into complete sentences, logically grouped into paragraphs and sections, and easy to follow from the presumed level of knowledge. All numerical results or summaries are reported to suitable precision, and with appropriate measures of uncertainty attached when applicable.

All figures and tables shown are relevant to the argument for ultimate conclusions. Figures and tables are easy to read, with informative captions, titles, axis labels and legends, and are placed near the relevant pieces of text. The code is formatted and organized so that it is easy for others to read and understand.

It is indented, commented, and uses meaningful names. It only includes computations which are actually needed to answer the analytical questions, and avoids redundancy. Code borrowed from the notes, from books, or from resources found online is explicitly acknowledged and sourced in the comments. Functions or procedures not directly taken from the notes have accompanying tests which check whether the code does what it is supposed to. Model specifications are described clearly and in appropriate detail.

There are clear explanations of how estimating the model helps to answer the analytical questions, and rationales for all modeling choices. If multiple models are compared, they are all clearly described, along with the rationale for considering multiple models, and the reasons for selecting one model over another, or for using multiple models simultaneously.

The actual estimation and simulation of model parameters or estimated functions is technically correct. All calculations based on estimates are clearly explained, and also technically correct. All estimates or derived quantities are accompanied with appropriate measures of uncertainty. The substantive, analytical questions are all answered as precisely as the data and the model allow. The chain of reasoning from estimation results about the model, or derived quantities, to substantive conclusions is both clear and convincing.

If uncertainties in the data and model mean the answers to some questions must be imprecise, this too is reflected in the conclusions. All sources used, whether in conversation, print, online, or otherwise, are listed and acknowledged where they are used in code, words, pictures, and any other components of the analysis.
