## Logistic regression is the preferred method for examining this type of data.

These methods are different from what we have seen so far. At the same time, you will recognize a lot of similar features.

Let's begin with an example of the type of data that lends itself to this analysis. Consider the experimental data summarized in Table . There were six large jars, each containing a number of beetles and a carefully measured small amount of insecticide. After a specified amount of time, the experimenters examined the number of beetles that were still alive. We can calculate the empirical death rate for each jar's level of exposure to the insecticide. These are given in the last row of Table  using

Mortality rate = Number died / Number exposed

How can we develop statistical models to describe these data? We should immediately recognize that within each jar the outcomes are binary valued: alive or dead. We can probably assume that these events are independent of each other. The counts of alive or dead should then follow the binomial distribution. (See for a quick review of this important statistical model.) There are six separate and independent binomial experiments in this example. The n parameters for the binomial models are the numbers of insects in each jar. Similarly, the p parameters represent the mortality probabilities in each jar. The aim is to model the p parameters of these six binomial experiments. Any models we develop for these data will need to incorporate the various insecticide dose levels to describe the mortality probability p in each jar.
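As a minimal sketch of the mortality-rate computation, using hypothetical counts (the table's actual values are not reproduced here, so both lists below are illustrative assumptions):

```python
# Hypothetical jar data: number exposed and number died at each dose level.
exposed = [50, 49, 48, 50, 49, 50]   # n parameter of each binomial experiment
died    = [3, 10, 17, 22, 40, 48]    # observed deaths (illustrative values only)

# Empirical mortality rate = number died / number exposed, per jar.
rates = [d / n for d, n in zip(died, exposed)]
print([round(r, 3) for r in rates])
```

Each rate is the observed estimate of that jar's binomial p parameter.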
The alert reader will notice that the empirical mortality rates given in the last row of Table  are not monotonically increasing with increasing exposure levels of the insecticide. Despite this remark, there is no reason for us to fit a nonmonotonic model to these data.

The aim of this chapter is to explain models for the different values of the p parameters using the values of the doses of the insecticide in each jar. The first idea that comes to mind is to treat the outcomes as normally distributed. All of the ns are large, so a normal approximation to the binomial distributions should work. We also already know a lot about linear regression, so we are tempted to fit the linear model

What is wrong with this approach? It seems simple enough, but remember that p must always be between 0 and 1. There is no guarantee that, at extreme values of the dose, this straight-line model would avoid estimates of p that are less than 0 or greater than 1. We would expect a good model for these data to always give an estimate of p between 0 and 1. A second but less striking problem is that the variance of the binomial distribution is not the same for different values of the p parameter, so the assumption of constant variance made in the usual linear regression is not valid.

## If you are a reader, then you must read these books.

1. 1984 by George Orwell. This high school classic is well worth a re-read. People who are either just entering or getting settled into their careers are likely to relate to 1984 a little differently than they might have at 16 or 17. It is a story of power and brutality, and it will resonate with readers as power structures start to become more visible in a newly employed person's life.

2. Beijing Bastard: Into the Wilds of a Changing China by Val Wang. Aside from difficulties in climbing the career ladder, many quarter-life crises are spurred on by a flimsy sense of self. Wang's memoir about finding her identity is sure to connect with soul-searchers, the librarians say.

3. Between the World and Me by Ta-Nehisi Coates. New York Times critic A. O. Scott says this book is "essential, like water or air." A winner of the National Book Award, Coates's book reveals a perspective on race in America that challenges everyone to reckon with the country's brutal past, both recent and distant. The book can help people who are just learning about their place in the world gain new insight.

4. Feminism: Reinventing the F-Word by Nadia Abushanab Higgins. This book wants to make it less uncomfortable to be a feminist. The librarians say the book is essential for people moving into the wider world, because so many communities are simply too diverse for regressive mindsets about equality.

5. Tiny Beautiful Things by Cheryl Strayed. Cheryl Strayed, the once-anonymous advice columnist, has compiled pieces of wisdom into a book the librarians call "the best, most compassionate advice about being a fully realized, empathic person in the world." At its heart, Tiny Beautiful Things is a reminder that life is fraught with uncertainty, and that what we call "quarter-life crises" might just be the first in a series of opportunities to ask for help.

## The medians test and the examination of the math test scores in Section  reduce every observation to a coin toss.

Specifically, the medians test judges every observation as being either above or below the pooled sample median. The actual magnitude of every observation is lost. It does not matter how far above or below the median an observation is. Does this seem like a tremendous loss of information? Rank methods meet this loss halfway: instead of reducing all observations to binary above/below status, rank methods replace the actual observations with their order when the data is sorted.

To illustrate this method, consider the plant growth data from Table . All 20 of the pooled sample values are sorted in Table  from smallest to largest and identified as belonging to either the control or the treated group. Each observation is also identified with its rank or order number, from 1 to 20, in terms of the pooled sample. So, for example, the four smallest observations (ranked 1-4) are associated with the control group, and the next two smallest observations (ranked 5 and 6) are in the treated group.

In rank methods, we replace the observations by their ranks. A large observation achieves a high rank, unlike the medians test, where all observations larger than the median are treated equally. In Table , if both the treated and the control plants had roughly the same means, then we would also expect these two samples to have roughly the same average rank values.

We will return to this example and the SAS program for the rank test in Section . Before we get to that, let us again illustrate the use of ranked data and show that this is a frequently used approach to describing data that is sometimes qualitative, rather than quantitative, in nature.
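The ranking step itself can be sketched in a few lines. The growth values below are made-up stand-ins for the table's actual data:

```python
# Pooled sample of hypothetical growth measurements from two groups.
control = [4.2, 4.5, 4.1, 5.0]
treated = [5.3, 5.8, 4.9]
pooled = [(x, "control") for x in control] + [(x, "treated") for x in treated]

# Sort the pooled sample; rank 1 goes to the smallest observation.
pooled.sort(key=lambda pair: pair[0])
ranks = {}
for rank, (value, group) in enumerate(pooled, start=1):
    ranks.setdefault(group, []).append(rank)

print(ranks)  # each group's rank values within the pooled ordering
```

Comparing the average rank in each group is then the heart of the rank test: similar averages suggest similar locations for the two populations.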

## Correlation

The correlation coefficient is a single-number summary expressing the utility of linear regression. The correlation coefficient is a dimensionless number between -1 and +1. The slope and the correlation have the same positive or negative sign. This single number is used to convey the strength of a linear relationship, so values closer to -1 or +1 indicate greater fidelity to a straight-line relationship.

The correlation is standardized in the sense that its value does not depend on the means or standard deviations of the x or y values. If we add or subtract the same values from the data (and thereby change the means), the correlation remains the same. If we multiply all the xs (or the ys) by some positive value, the correlation remains the same. If we multiply either the xs or the ys by a negative number, the sign of the correlation will reverse.

As with any oversimplification of a complex situation, the correlation coefficient has its benefits but also its shortcomings. A variety of values of the correlation are illustrated. Each of these separate graphs consists of 50 simulated pairs of observations. A correlation of 0, in the upper left, gives no indication of a linear relationship between the plotted variables. A correlation of 0.4 does not indicate much strength, either. A correlation of either 0.8 or -0.9 indicates a rather strong linear trend.
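The standardization claims above are easy to check numerically. This sketch uses NumPy's corrcoef on arbitrary made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r = np.corrcoef(x, y)[0, 1]

# Shifting either variable or scaling it by a positive number leaves r unchanged.
r_shifted = np.corrcoef(x + 100.0, 3.0 * y)[0, 1]
# Multiplying one variable by a negative number reverses the sign.
r_flipped = np.corrcoef(-2.0 * x, y)[0, 1]

assert abs(r - r_shifted) < 1e-9
assert abs(r + r_flipped) < 1e-9
```

The value of r depends only on how tightly the points cluster around a line, not on the units or origins of the two variables.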

## We have seen this curious expression in two settings now: in the use of the t-test and when using the chi-squared test. What exactly are degrees of freedom, anyway?

More specifically, let's look at two situations where these words come up. The expression

Σᵢ (xᵢ - x̄)²

(where x̄ is the average of the xs) is associated with N - 1 degrees of freedom. Similarly, in a 2 × 2 table of counts, we always say the chi-squared statistic has one degree of freedom. The general rule is: the degrees of freedom are the number of data points to which you can assign any value. Let's see how to apply this rule. When we look at the expression Σᵢ (xᵢ - x̄)², there are N terms in the sum. Each term is a squared difference between an observation xᵢ and the average x̄ of all the xs. Let us look at these differences and write them down.

We have d₁ = x₁ - x̄, d₂ = x₂ - x̄, ..., d_N = x_N - x̄.

There are N differences dᵢ, but notice that these must always sum to 0: adding up all of the values on the right-hand sides gives the sum of the xs minus N times their average. The dᵢ must sum to 0 no matter what the values of the xs are.

So how many differences dᵢ can we freely choose to be any values we want, and still have them add up to 0? The answer is all of them except for the last one. The last one is determined by all of the others, so that they all sum to 0. Notice also that it does not matter which dᵢ we call the "last." We can freely choose N - 1 values, and the one remaining value is determined by all of the others. Now let us examine the use of degrees of freedom when discussing the chi-squared test. As an example, let us return to the data given below.
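The sum-to-zero constraint is easy to verify in code: given any N - 1 freely chosen differences, the last one is forced.

```python
# Any sample at all: the differences from the mean always sum to zero.
xs = [3.0, 7.0, 1.0, 9.0, 5.0]
mean = sum(xs) / len(xs)
diffs = [x - mean for x in xs]
assert abs(sum(diffs)) < 1e-12

# Choose the first N-1 differences freely; the last one is then determined.
free = diffs[:-1]
forced_last = -sum(free)            # the only value that restores the zero sum
assert abs(forced_last - diffs[-1]) < 1e-12
```

Only N - 1 of the differences carry free information, which is exactly why the expression has N - 1 degrees of freedom.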

The chi-squared statistic measures the association of rows and columns. In the present example, the association of interest is between developing lung cancer and exposure to the fungicide. The test is independent of the numbers of mice allocated to exposure or not, and of the numbers of mice that eventually develop tumors or not. The significance level of the test should only reflect the "inside" counts of this table and not these marginal totals.

## Wednesday, 22 March 2017

### Statistics

What is Statistics?

In an effort to present a lot of mathematical formulas, we sometimes lose track of the central idea of the discipline. It is important to remember the big picture when we get too close to the subject.

Let us consider a vast wall that separates our lives from the place where the information resides. It is impossible to see over or around this wall, but every now and then we have the good fortune of having some pieces of data thrown over to us. On the basis of this fragmentary sampled data, we are supposed to infer the composition of the remainder on the other side. This is the aim of statistical inference.

The population is usually vast and infinite, whereas the sample is just a handful of numbers.

There is an enormous possibility for error, of course. If all of the left-handed people I know also have artistic ability, am I allowed to generalize this to a statement about all left-handed people? In this case I do not have much data to make my claim, and my statement should reflect a large possibility of error. Maybe most of my friends are also artists. In this case we say that the sample data is biased, because it contains more artists than would be found in a representative sample of the population.

Consider the separate concepts of sample and population for numerically valued data. The sample average is a number that we use to infer the value of the population mean. The average of several numbers is itself a number that we can compute. The population mean, however, is on the other side of the imaginary wall and is not observable. In fact, the population mean is almost an unknowable quantity that could not be observed even after a lifetime of study.

Statistical inference is the process of generalizing from a sample of data to the larger population. The sample average is a simple statistic that immediately comes to mind. The Student t-test is the principal method used to make inferences about the population mean on the basis of the sample average.

## Inverse Relation

Let R be a relation from A to B. The inverse relation of R, denoted by R^(-1), is a relation from B to A defined by

R^(-1) = {(y, x) : x ∈ A, y ∈ B, (x, y) ∈ R}.

In other words, the inverse relation is obtained by reversing each of the ordered pairs belonging to R. Thus,

(x, y) ∈ R <=> (y, x) ∈ R^(-1)

Evidently, the range of R is the domain of R^(-1) and vice versa. If A = B, then R and R^(-1) are both relations on A.
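A quick sketch of this definition in code, representing a relation as a set of ordered pairs (the sets A, B, and R below are arbitrary examples):

```python
# A relation R from A to B as a set of ordered pairs (x, y).
A = {1, 2, 3}
B = {"a", "b"}
R = {(1, "a"), (2, "a"), (3, "b")}

# The inverse relation reverses every ordered pair.
R_inv = {(y, x) for (x, y) in R}
assert R_inv == {("a", 1), ("a", 2), ("b", 3)}

# The range of R equals the domain of R^(-1), and vice versa.
assert {y for (_, y) in R} == {x for (x, _) in R_inv}
assert {x for (x, _) in R} == {y for (_, y) in R_inv}
```

Reversing the pairs twice recovers R, which mirrors the fact that (R^(-1))^(-1) = R.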

## Monday, 20 March 2017

### THREE DIMENSIONAL GEOMETRY


While studying Analytical Geometry in two dimensions, and in the introduction to three dimensional geometry, we confined ourselves to Cartesian methods only. We will now apply vector algebra to three dimensional geometry; the advantage of this approach is that it makes the study simple and elegant.
In this chapter, we shall study the direction cosines and direction ratios of a line joining two points, and also discuss the equations of lines and planes in space under different conditions, the angle between two lines, between two planes, and between a line and a plane, the shortest distance between two skew lines, and the distance of a point from a plane. Most of the above results are obtained in vector form. Nevertheless, we shall also translate these results into the Cartesian form which, at times, presents a clearer geometric and analytic picture of the situation.

## Most real-life problems, when formulated as an LP model, have more than two variables and are too large for the graphical solution method.

Thus, we need a more efficient method to suggest an optimal solution for such problems. In this chapter, we shall discuss a procedure called the simplex method for solving an LP model of such problems. The method was developed by G. B. Dantzig in 1947.

The simplex is an important term in mathematics that represents an object in n-dimensional space connecting n + 1 points. In one dimension, a simplex is a line segment connecting two points; in two dimensions, it is a triangle formed by joining three points; in three dimensions, it is a four-sided pyramid having four corners.

The concept of the simplex method is similar to the graphical method. In the graphical method, extreme points of the feasible solution space are examined to search for an optimal solution at one of them. For LP problems with several variables, we may not be able to graph the feasible region, but the optimal solution will still lie at an extreme point of the many-sided, multidimensional figure [called an n-dimensional polyhedron] that represents the feasible solution space. The simplex method examines the extreme points in a systematic manner, repeating the same set of steps of the algorithm until an optimal solution is reached. It is for this reason that it is also called the iterative method.

Since the number of extreme points [corners or vertices] of the feasible solution space is finite, the method assures improvement in the value of the objective function as we move from one iteration [extreme point] to another, achieves the optimal solution in a finite number of steps, and also indicates when an unbounded solution is reached.

## The triangle of Mars

The triangle of Mars is formed by the lines of life, head, and the hepatica. The shape and the position of the great triangle must be considered by themselves, although it contains the upper, the middle, and the lower angle; these three points will be dealt with later. When the triangle is well formed by the lines of the head, life, and health, it should be broad and enclose the entire plain of Mars. In such a case it denotes breadth of views, liberality, and generosity of spirit; such a person will be inclined to sacrifice himself to further the interests of the whole, not the unit.

If, on the contrary, it is formed by three small, wavy, uncertain lines, it denotes timidity of spirit, meanness, and cowardice. Such a man would always go with the majority, even against his principles. In the second formation of the triangle, when it has for its base the line of the sun, the subject will have narrow ideas but great individuality and strong resolution. Such a sign, from the very qualities it exhibits, contains within itself the seeds of worldly success.

## Tuesday, 7 March 2017

### THEORY OF FLUXIONS

The origin of the integral calculus goes back to the early period of development of mathematics. The great development of the method of exhaustion in the early period was obtained in the works of Eudoxus (440 B.C.) and Archimedes (330 B.C.). The theory of calculus began in the 17th century. In 1665, Newton began his work on the calculus, described by him as the theory of fluxions, and used his theory in finding the tangent and radius of curvature at any point on a curve. Newton introduced the basic notion of the inverse function, called by him the antiderivative, or the inverse method of tangents.

### Latin Cubes

What are Latin Cubes? The practical applications of Latin cubes and related designs are in factorial experiments.