The IIT JAM 2018 Mathematical Statistics (MS) question paper with answer key PDF, for the exam conducted on February 11 in the Forenoon Session (9 AM to 12 PM), is available for download. The exam was successfully organized by IIT Bombay. The question paper comprised a total of 60 questions divided among 3 sections.
IIT JAM 2018 Mathematical Statistics (MS) Question Paper with Answer Key PDFs Forenoon Session
| IIT JAM 2018 Mathematical Statistics (MS) Question paper with answer key PDF | Download PDF | Check Solutions |
Let {a_n} be a sequence of real numbers such that \( a_1 = 2 \), and for \( n \geq 1 \), \( a_{n+1} = \frac{2a_n + 1}{a_n + 1} \).
View Solution
Step 1: Analyze the recursive relation.
Given the recurrence relation, we first try to examine the behavior of the sequence. We begin with \( a_1 = 2 \).
We compute the next few terms to determine if there is a pattern.
Substitute \( a_1 = 2 \) into the recurrence: \[ a_2 = \frac{2(2) + 1}{2 + 1} = \frac{5}{3} \approx 1.67. \]
Next, compute \( a_3 \): \[ a_3 = \frac{2(1.67) + 1}{1.67 + 1} \approx \frac{4.34}{2.67} \approx 1.63. \]
Clearly, the terms are converging towards a limit.
Step 2: Solving for the limit.
To find the limit, let \( L \) be the value the sequence converges to. If the sequence converges, then \( a_{n+1} = a_n = L \).
So, using the recurrence relation: \[ L = \frac{2L + 1}{L + 1}. \]
Multiplying both sides by \( L + 1 \): \[ L(L + 1) = 2L + 1. \]
Expanding both sides: \[ L^2 + L = 2L + 1. \]
Rearranging: \[ L^2 - L - 1 = 0. \]
Solving this quadratic equation: \[ L = \frac{-(-1) \pm \sqrt{(-1)^2 - 4(1)(-1)}}{2(1)} = \frac{1 \pm \sqrt{1 + 4}}{2} = \frac{1 \pm \sqrt{5}}{2}. \]
Thus, the two possible limits are \( L = \frac{1 + \sqrt{5}}{2} \approx 1.618 \) and \( L = \frac{1 - \sqrt{5}}{2} \).
Since the sequence starts at \( a_1 = 2 \) and decreases monotonically, it cannot approach the negative root, so it converges to \( \frac{1 + \sqrt{5}}{2} \approx 1.618 \). Because the terms decrease from \( 2 \) towards this limit, every term satisfies \( 1.5 \leq a_n \leq 2 \).
Step 3: Conclusion.
The correct answer is (A) \( 1.5 \leq a_n \leq 2 \), since the sequence is bounded and converges to \( \frac{1 + \sqrt{5}}{2} \), which is approximately 1.618.
Quick Tip: When solving recurrence relations, always check for limits and boundaries by analyzing the recursive formula and solving for the fixed points.
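A quick numerical sketch of the iteration (plain Python, no external libraries; the ten iterations below are an arbitrary choice) shows the terms staying between 1.5 and 2 and settling near \( \frac{1+\sqrt{5}}{2} \approx 1.618 \):

```python
# Iterate a_{n+1} = (2*a_n + 1)/(a_n + 1) starting from a_1 = 2
# and watch the terms decrease towards the golden ratio (1 + sqrt(5))/2.
a = 2.0
for n in range(1, 11):
    print(f"a_{n} = {a:.6f}")
    a = (2 * a + 1) / (a + 1)
```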
The value of \[ \lim_{n \to \infty} \left( 1 + \frac{2}{n} \right)^{n^2} e^{-2n} \] is
View Solution
Step 1: Understanding the expression.
We are tasked with evaluating the limit of the expression: \[ \lim_{n \to \infty} \left( 1 + \frac{2}{n} \right)^{n^2} e^{-2n}. \]
This involves two components: the term \( \left( 1 + \frac{2}{n} \right)^{n^2} \) and the exponential term \( e^{-2n} \).
Step 2: Analyzing the first term.
The expression \( \left( 1 + \frac{2}{n} \right)^{n^2} \) resembles the form \( \left( 1 + \frac{1}{n} \right)^n \to e \), but with \( n^2 \) in the exponent a more careful expansion is needed. Taking logarithms and using \( \ln(1+x) = x - \frac{x^2}{2} + O(x^3) \) with \( x = \frac{2}{n} \): \[ n^2 \ln\left( 1 + \frac{2}{n} \right) = n^2 \left( \frac{2}{n} - \frac{2}{n^2} + O\!\left(\frac{1}{n^3}\right) \right) = 2n - 2 + O\!\left(\frac{1}{n}\right). \]
Step 3: Combining the terms.
Now, combining both parts of the expression: \[ \ln\left[ \left( 1 + \frac{2}{n} \right)^{n^2} e^{-2n} \right] = n^2 \ln\left( 1 + \frac{2}{n} \right) - 2n \;\to\; -2. \]
Thus, the value of the expression is \( e^{-2} \).
Step 4: Conclusion.
The correct answer is (A) \( e^{-2} \). Quick Tip: When evaluating limits involving exponential terms, break down the expression into simpler components and use known limits such as \( \left( 1 + \frac{1}{n} \right)^n \to e \) as \( n \to \infty \).
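Because \( \left(1 + \frac{2}{n}\right)^{n^2} \) overflows for large \( n \), a numerical check is easiest on the log scale: the quantity \( n^2\ln(1+2/n) - 2n \) should approach \( -2 \). A minimal sketch using only the standard library:

```python
import math

# log of (1 + 2/n)^(n^2) * e^(-2n) is n^2*log1p(2/n) - 2n, which should tend to -2.
for n in [10, 100, 1000, 10000]:
    log_value = n * n * math.log1p(2 / n) - 2 * n
    print(n, log_value, math.exp(log_value))
```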
Let {a_n} and {b_n} be two convergent sequences of real numbers. For \( n \geq 1 \), define \( u_n = \max\{a_n, b_n\} \) and \( v_n = \min\{a_n, b_n\} \). Then
View Solution
Step 1: Understanding convergence of sequences.
Since \( a_n \) and \( b_n \) are both convergent sequences, let: \[ \lim_{n \to \infty} a_n = A \quad and \quad \lim_{n \to \infty} b_n = B. \]
By the properties of limits, the maximum and minimum of two convergent sequences are also convergent. Specifically: \[ \lim_{n \to \infty} u_n = \lim_{n \to \infty} \max(a_n, b_n) = \max(A, B), \] \[ \lim_{n \to \infty} v_n = \lim_{n \to \infty} \min(a_n, b_n) = \min(A, B). \]
Step 2: Analyzing the options.
- (A) Neither \( \{a_n\} \) nor \( \{b_n\} \) converges: This is incorrect, as both sequences are given as convergent.
- (B) \( \{u_n\} \) converges but \( \{v_n\} \) does not converge: This is incorrect, as both sequences \( u_n \) and \( v_n \) converge to \( \max(A, B) \) and \( \min(A, B) \), respectively.
- (C) \( \{u_n\} \) does not converge but \( \{v_n\} \) converges: This is incorrect for the same reason as (B).
- (D) Both \( \{u_n\} \) and \( \{v_n\} \) converge: This is the correct answer, as both sequences converge by the properties of limits.
Step 3: Conclusion.
The correct answer is (D) Both \( \{u_n\} \) and \( \{v_n\} \) converge, as both the maximum and minimum of two convergent sequences are convergent.
Quick Tip: When dealing with the maximum and minimum of convergent sequences, remember that these operations preserve convergence and the limits are simply the maximum or minimum of the individual sequence limits.
Let

If \( I \) is the \( 2 \times 2 \) identity matrix and \( 0 \) is the \( 2 \times 2 \) zero matrix, then
View Solution
Step 1: Compute \( M^2 \).
We start by calculating \( M^2 \):

Step 2: Substitute into the given expression.
Now, substitute \( M^2 \) into the given expression:

Simplifying:

Step 3: Conclusion.
The expression equals \( 0 \) when substituted correctly. Therefore, the correct answer is (A) \( 20 M^2 - 13 M + 7 I = 0 \). Quick Tip: When solving matrix equations, always ensure to perform matrix multiplications and additions/subtractions step by step. Verify the result to avoid common calculation errors.
Let \( X \) be a random variable with the probability density function

If \( E(X) = 20 \) and \( Var(X) = 10 \), then \( (\alpha, p) \) is
View Solution
Step 1: Understand the given conditions.
The probability density function is of the Gamma distribution type, where the expectation and variance for a Gamma distribution with shape parameter \( p \) and rate parameter \( \alpha \) are: \[ E(X) = \frac{p}{\alpha}, \quad Var(X) = \frac{p}{\alpha^2}. \]
Step 2: Solve for \( \alpha \) and \( p \).
We are given \( E(X) = 20 \) and \( Var(X) = 10 \). Using the formulas: \[ \frac{p}{\alpha} = 20 \quad and \quad \frac{p}{\alpha^2} = 10. \]
From the first equation, solve for \( p \): \[ p = 20\alpha. \]
Substitute this into the second equation: \[ \frac{20\alpha}{\alpha^2} = 10, \]
which simplifies to: \[ \frac{20}{\alpha} = 10 \quad \Rightarrow \quad \alpha = 2. \]
Now, substitute \( \alpha = 2 \) into \( p = 20\alpha \): \[ p = 20(2) = 40. \]
Thus, \( \alpha = 2 \) and \( p = 40 \).
Step 3: Conclusion.
The values satisfying both equations are \( (\alpha, p) = (2, 40) \), as computed above. Quick Tip: For Gamma distributions, use the relationships between the shape and rate parameters to find the mean and variance.
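A two-line check of the algebra, assuming the same parameterization as in the formulas above (rate \( \alpha \), shape \( p \), so \( \alpha = E(X)/\mathrm{Var}(X) \) and \( p = E(X)^2/\mathrm{Var}(X) \)):

```python
mean, var = 20, 10
alpha = mean / var       # rate  = E(X)/Var(X)
p = mean ** 2 / var      # shape = E(X)^2/Var(X)
print(alpha, p)          # 2.0 40.0
```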
Let \( X \) be a random variable with the distribution function

Then \[ P(X = 0) + P(X = 1.5) + P(X = 2) + P(X \geq 1) \]
equals
View Solution
Step 1: Understand the distribution function.
To find \( P(X = 0) \), \( P(X = 1.5) \), and so on, we calculate the individual probabilities. Since \( F \) jumps by \( \frac{1}{4} \) at \( x = 0 \), \[ P(X = 0) = F(0) - F(0-) = \frac{1}{4}. \]
For \( P(X = 1.5) \): the distribution function has no jump at \( x = 1.5 \), so \( P(X = 1.5) = F(1.5) - F(1.5-) = 0 \).
Similarly, \( F \) has no jump at \( x = 2 \), so \( P(X = 2) = 0 \).
Step 2: Calculate total probability.
Now, calculate \( P(X \geq 1) \): \[ P(X \geq 1) = 1 - F(1-) = 1 - \left(\frac{1}{4} + \frac{4(1) - (1)^2}{8}\right) = 1 - \frac{5}{8} = \frac{3}{8}. \]
Now, summing the probabilities: \[ P(X = 0) + P(X = 1.5) + P(X = 2) + P(X \geq 1) = \frac{1}{4} + 0 + 0 + \frac{3}{8} = \frac{5}{8}. \]
Step 3: Conclusion.
The correct answer is (B) \( \frac{5}{8} \). Quick Tip: When working with distribution functions, be careful with jumps at discrete values and ensure to calculate the probability mass at each point separately.
Let \( X_1, X_2 \) and \( X_3 \) be i.i.d. \( U(0,1) \) random variables. Then \[ E\left( \frac{X_1 + X_2}{X_1 + X_2 + X_3} \right) \]
equals
View Solution
Step 1: Understand the problem.
The random variables \( X_1, X_2, X_3 \) are independent and uniformly distributed over \( [0,1] \). By symmetry, \( E\!\left( \frac{X_i}{X_1 + X_2 + X_3} \right) \) is the same for \( i = 1, 2, 3 \), and the three terms sum to \( E(1) = 1 \), so each equals \( \frac{1}{3} \). By linearity of expectation, \[ E\left( \frac{X_1 + X_2}{X_1 + X_2 + X_3} \right) = \frac{1}{3} + \frac{1}{3} = \frac{2}{3}. \]
Step 2: Conclusion.
The expectation equals \( \frac{2}{3} \). Quick Tip: In cases involving i.i.d. random variables, you can simplify the expected value by symmetry and the linearity of expectation.
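A Monte Carlo sketch (numpy, with an arbitrary seed and sample size) agrees with the symmetry argument that the expectation is \( \frac{2}{3} \):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(1_000_000, 3))          # columns are X1, X2, X3 ~ U(0,1)
ratio = (x[:, 0] + x[:, 1]) / x.sum(axis=1)   # (X1+X2)/(X1+X2+X3)
print(ratio.mean())                           # close to 2/3 ≈ 0.6667
```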
Let \( x_1 = 0, x_2 = 1, x_3 = 2, x_4 = 3 \) and \( x_5 = 0 \) be the observed values of a random sample of size 5 from a discrete distribution with the probability mass function

where \( \theta \in [0,1] \) is the unknown parameter. Then the maximum likelihood estimate of \( \theta \) is
View Solution
Step 1: Write the likelihood function.
The likelihood function \( L(\theta) \) is the product of the probability mass functions for each observation: \[ L(\theta) = \prod_{i=1}^{5} f(x_i; \theta). \]
Given the observations \( x_1 = 0, x_2 = 1, x_3 = 2, x_4 = 3, x_5 = 0 \), we substitute into the probability mass function: \[ L(\theta) = \left( \frac{\theta}{3} \right)^2 \cdot \left( \frac{2\theta}{3} \right) \cdot \left( \frac{1-\theta}{2} \right)^2. \]
Simplifying: \[ L(\theta) = \frac{\theta^2 \cdot 2\theta \cdot (1-\theta)^2}{3^2 \cdot 3 \cdot 2^2} = \frac{2\theta^3 (1-\theta)^2}{108} = \frac{\theta^3 (1-\theta)^2}{54}. \]
Step 2: Maximize the likelihood.
To find the maximum likelihood estimate, we differentiate \( L(\theta) \) with respect to \( \theta \) and set the derivative equal to zero. Since the constant factor does not affect the maximizer, it suffices to maximize \( 2\theta^3 (1-\theta)^2 \).
Differentiate: \[ \frac{d}{d\theta} \left( 2\theta^3 (1-\theta)^2 \right) = 6\theta^2 (1-\theta)^2 - 4\theta^3 (1-\theta). \]
Setting this to zero and dividing by \( 2\theta^2 (1-\theta) \) (non-zero for \( 0 < \theta < 1 \)) gives \( 3(1-\theta) = 2\theta \), hence \( \theta = \frac{3}{5} \).
Step 3: Conclusion.
The maximum likelihood estimate of \( \theta \) is \( \frac{3}{5} \), so the correct answer is (B). Quick Tip: When solving for the maximum likelihood estimate, first write the likelihood function, then differentiate with respect to \( \theta \), and solve for the value that maximizes the likelihood.
Consider four coins labelled as 1, 2, 3 and 4. Suppose that the probability of obtaining a 'head' in a single toss of the \(i\)th coin is \( \frac{i}{4} \), \( i = 1, 2, 3, 4 \). A coin is chosen uniformly at random and flipped. Given that the flip resulted in a 'head', the conditional probability that the coin was labelled either 1 or 2 equals
View Solution
Step 1: Set up the problem using conditional probability.
Let \( A_i \) denote the event that coin \( i \) was chosen, and let \( H \) denote the event of getting a 'head'. We want \( P(A_1 \cup A_2 \mid H) \), the probability that the coin labelled 1 or 2 was chosen given that the result was 'head'. By the definition of conditional probability: \[ P(A_1 \cup A_2 \mid H) = \frac{P((A_1 \cup A_2) \cap H)}{P(H)}. \]
Step 2: Find \( P(H) \).
By the law of total probability, weighting each coin by the selection probability \( \frac{1}{4} \): \[ P(H) = \frac{1}{4}\left(\frac{1}{4} + \frac{2}{4} + \frac{3}{4} + \frac{4}{4}\right) = \frac{1}{4} \cdot \frac{10}{4} = \frac{5}{8}. \]
Step 3: Find \( P((A_1 \cup A_2) \cap H) \).
The probability of selecting coin 1 or coin 2 and obtaining a 'head' is: \[ P((A_1 \cup A_2) \cap H) = P(A_1 \cap H) + P(A_2 \cap H) = \frac{1}{4} \cdot \frac{1}{4} + \frac{1}{4} \cdot \frac{2}{4} = \frac{3}{16}. \]
Step 4: Final calculation.
Thus, the conditional probability is: \[ P(A_1 \cup A_2 \mid H) = \frac{3/16}{5/8} = \frac{3}{10}. \]
Step 5: Conclusion.
The conditional probability equals \( \frac{3}{10} \). Quick Tip: When calculating conditional probabilities, use the formula \( P(A \mid B) = \frac{P(A \cap B)}{P(B)} \) and be sure to account for all possible outcomes and events involved.
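With the head probabilities read as \( i/4 \), the exact computation can be checked with Python fractions (a verification sketch, not the official solution):

```python
from fractions import Fraction

# P(head | coin i) = i/4, each coin chosen with probability 1/4.
p_head = {i: Fraction(i, 4) for i in (1, 2, 3, 4)}
p_h = sum(Fraction(1, 4) * p_head[i] for i in p_head)            # total P(H) = 5/8
p_12_and_h = sum(Fraction(1, 4) * p_head[i] for i in (1, 2))     # 3/16
print(p_12_and_h / p_h)                                          # 3/10
```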
Consider the linear regression model \[ y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \quad i = 1, 2, \dots, n, \] where the \( \epsilon_i \) are i.i.d. standard normal random variables. Given that \[ \frac{1}{n} \sum_{i=1}^n x_i = 3.2, \quad \frac{1}{n} \sum_{i=1}^n y_i = 4.2, \quad \frac{1}{n} \sum_{j=1}^n \left( x_j - \frac{1}{n} \sum_{i=1}^n x_i \right)^2 = 1.5, \] \[ \frac{1}{n} \sum_{j=1}^n \left( x_j - \frac{1}{n} \sum_{i=1}^n x_i \right) \left( y_j - \frac{1}{n} \sum_{i=1}^n y_i \right) = 1.7, \]
the maximum likelihood estimates of \( \beta_0 \) and \( \beta_1 \), respectively, are
View Solution
Step 1: Recall the maximum likelihood estimation in linear regression.
In the linear regression model \( y_i = \beta_0 + \beta_1 x_i + \epsilon_i \), the maximum likelihood estimates of \( \beta_0 \) and \( \beta_1 \) are the ordinary least squares estimates given by: \[ \hat{\beta_1} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}, \quad \hat{\beta_0} = \bar{y} - \hat{\beta_1} \bar{x}. \]
Step 2: Apply the given values.
We are given the following sums: \[ \frac{1}{n} \sum_{i=1}^n x_i = 3.2, \quad \frac{1}{n} \sum_{i=1}^n y_i = 4.2, \quad \frac{1}{n} \sum_{j=1}^n \left( x_j - \frac{1}{n} \sum_{i=1}^n x_i \right)^2 = 1.5, \] \[ \frac{1}{n} \sum_{j=1}^n \left( x_j - \frac{1}{n} \sum_{i=1}^n x_i \right) \left( y_j - \frac{1}{n} \sum_{i=1}^n y_i \right) = 1.7. \]
These values correspond to the sample means \( \bar{x} \), \( \bar{y} \), the sum of squared deviations \( S_x^2 \), and the covariance \( S_{xy} \).
Step 3: Calculate the estimates.
From the provided values, we compute: \[ \hat{\beta_1} = \frac{S_{xy}}{S_x^2} = \frac{1.7}{1.5} = \frac{17}{15}, \] \[ \hat{\beta_0} = \bar{y} - \hat{\beta_1} \bar{x} = 4.2 - \left( \frac{17}{15} \times 3.2 \right) = \frac{315}{75} - \frac{272}{75} = \frac{43}{75}. \]
Step 4: Conclusion.
The maximum likelihood estimates are \( \hat{\beta_0} = \frac{43}{75} \approx 0.573 \) and \( \hat{\beta_1} = \frac{17}{15} \approx 1.133 \). Quick Tip: For linear regression, the maximum likelihood estimates of \( \beta_0 \) and \( \beta_1 \) are the ordinary least squares estimates, which can be derived using the covariance and variance of the data.
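The arithmetic for \( \hat\beta_0 \) and \( \hat\beta_1 \) can be verified exactly with Python fractions (a quick check of the numbers above):

```python
from fractions import Fraction

x_bar, y_bar = Fraction(32, 10), Fraction(42, 10)   # 3.2 and 4.2
s_xy, s_xx = Fraction(17, 10), Fraction(15, 10)     # 1.7 and 1.5

beta1 = s_xy / s_xx                 # 17/15
beta0 = y_bar - beta1 * x_bar       # 43/75
print(beta1, beta0)                 # 17/15 43/75
```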
Let \( f : [-1, 1] \to \mathbb{R} \) be defined by \[ f(x) = \frac{x^2 + [\sin(\pi x)]}{1 + |x|}, \] where \( [y] \) denotes the greatest integer less than or equal to \( y \).
Then
View Solution
Step 1: Analyze the function's components.
The function involves both integer and sine functions, which have specific points of discontinuity. We need to check whether the function is continuous or discontinuous at certain points in the domain \( [-1, 1] \). Since the function involves the greatest integer function and the absolute value, these create potential discontinuities.
Step 2: Analyze discontinuity.
- On \( (-1, 0) \) we have \( \sin(\pi x) < 0 \), so \( [\sin(\pi x)] = -1 \), whereas \( [\sin(\pi x)] = 0 \) at \( x = -1 \) and at \( x = 0 \). The numerator therefore jumps as \( x \) crosses \( -1 \) and \( 0 \), producing discontinuities at these two points.
- At \( x = \frac{1}{2} \), \( \sin(\pi x) = 1 \) and \( [\sin(\pi x)] = 1 \), while it equals \( 0 \) on either side of \( \frac{1}{2} \); this gives a third discontinuity. At \( x = 1 \) the one-sided limit agrees with the value \( f(1) \), so \( f \) is continuous there.
Step 3: Conclusion.
The correct answer is (B) \( f \) is discontinuous at \( -1, 0, \frac{1}{2} \). Quick Tip: When working with piecewise functions involving floor or ceiling functions, check for discontinuities where the argument crosses integer boundaries.
Let \( f, g : \mathbb{R} \to \mathbb{R} \) be defined by \[ f(x) = x^2 - \frac{\cos(x)}{2}, \quad g(x) = \frac{x \sin(x)}{2}. \]
Then
View Solution
Step 1: Set up the equation.
We need to solve the equation \( f(x) = g(x) \): \[ x^2 - \frac{\cos(x)}{2} = \frac{x \sin(x)}{2}. \]
Multiply through by 2 to eliminate the fractions: \[ 2x^2 - \cos(x) = x \sin(x). \]
Step 2: Solve the equation.
Define \( h(x) = 2x^2 - \cos(x) - x \sin(x) \). Then \( h \) is an even function with \( h(0) = -1 < 0 \), and \( h'(x) = 4x - x\cos(x) = x(4 - \cos(x)) \), which is positive for \( x > 0 \) and negative for \( x < 0 \). So \( h \) decreases on \( (-\infty, 0) \), increases on \( (0, \infty) \), and tends to \( +\infty \) in both directions.
Step 3: Conclusion.
Since \( h \) has a single negative minimum at \( 0 \) and is strictly monotone on each side of \( 0 \), it has exactly one positive root and exactly one negative root. Hence \( f(x) = g(x) \) for exactly two values of \( x \). Quick Tip: For transcendental equations mixing algebraic and trigonometric terms, study the sign and monotonicity of the difference function instead of guessing particular values of \( x \).
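A sign-scan of \( h(x) = 2x^2 - \cos x - x\sin x \) over a grid (a rough numerical sketch; the grid range and step are arbitrary) shows exactly two sign changes, consistent with the analysis above:

```python
import math

def h(x):
    # difference 2*f(x) - 2*g(x) = 2x^2 - cos(x) - x*sin(x)
    return 2 * x * x - math.cos(x) - x * math.sin(x)

xs = [i / 100 for i in range(-500, 501)]           # grid on [-5, 5]
sign_changes = sum(1 for a, b in zip(xs, xs[1:]) if h(a) * h(b) < 0)
print(sign_changes)                                # 2
```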
Consider the domain \( D = \{ (x, y) \in \mathbb{R}^2 : x \leq y \} \) and the function \( h : D \to \mathbb{R} \) defined by \[ h((x, y)) = (x - 2)^4 + (y - 1)^4, \quad (x, y) \in D. \]
Then the minimum value of \( h \) on \( D \) equals
View Solution
Step 1: Analyze the function.
The function \( h(x, y) = (x - 2)^4 + (y - 1)^4 \) is a sum of two non-negative terms; without any constraint it would be minimized at \( x = 2, y = 1 \).
Step 2: Check the constraint.
The point \( (2, 1) \) does not satisfy \( x \leq y \), so the minimum over \( D \) must occur on the boundary \( x = y \).
Step 3: Evaluate \( h \) on the boundary.
On the line \( x = y = t \), \( h = (t - 2)^4 + (t - 1)^4 \). Setting the derivative to zero gives \( 4(t-2)^3 + 4(t-1)^3 = 0 \), i.e. \( t - 2 = -(t - 1) \), so \( t = \frac{3}{2} \). Then \[ h\left(\tfrac{3}{2}, \tfrac{3}{2}\right) = \left(-\tfrac{1}{2}\right)^4 + \left(\tfrac{1}{2}\right)^4 = \tfrac{1}{16} + \tfrac{1}{16} = \tfrac{1}{8}. \]
Thus, the minimum value of \( h \) on \( D \) is \( \frac{1}{8} \).
Step 4: Conclusion.
The correct answer is (C) \( \frac{1}{8} \). Quick Tip: When dealing with optimization problems, check the function's critical points and constraints to find the minimum or maximum value.
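Since the minimum lies on the boundary \( x = y \), a one-dimensional scan of \( (t-2)^4 + (t-1)^4 \) (a small sketch with an arbitrary grid) confirms the value \( \frac{1}{8} \) at \( t = \frac{3}{2} \):

```python
def boundary(t):
    # h restricted to the boundary x = y = t of the region x <= y
    return (t - 2) ** 4 + (t - 1) ** 4

ts = [i / 1000 for i in range(0, 3001)]          # t in [0, 3]
best_t = min(ts, key=boundary)
print(best_t, boundary(best_t))                  # 1.5  0.125
```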
Let \( M = [X \ Y \ Z] \) be an orthogonal matrix with \( X, Y, Z \in \mathbb{R}^3 \) as its column vectors. If \[ Q = X X^T + Y Y^T, \]
then
View Solution
Step 1: Analyze the matrix properties.
The matrix \( M \) is orthogonal, so \( M M^T = X X^T + Y Y^T + Z Z^T = I \). Hence \( Q = X X^T + Y Y^T = I - Z Z^T \).
Step 2: Verify the conditions.
Since the columns are orthonormal, \( Z^T Z = 1 \), so \( Q Z = Z - Z (Z^T Z) = 0 \). Moreover \( Q^2 = (I - ZZ^T)^2 = I - 2ZZ^T + Z(Z^T Z)Z^T = I - ZZ^T = Q \), so \( Q \) is a projection and is not invertible.
Step 3: Conclusion.
Thus \( Q = I - Z Z^T \) satisfies \( QZ = 0 \) (not \( Z \)); it is the orthogonal projection onto the plane spanned by \( X \) and \( Y \). Quick Tip: For orthogonal matrices, the column vectors are orthonormal, and identities such as \( X X^T + Y Y^T + Z Z^T = I \) simplify many matrix operations.
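A concrete numerical check with a rotation matrix (numpy; the particular matrix is a hypothetical example) illustrates that \( Q = XX^T + YY^T = I - ZZ^T \) and \( QZ = 0 \):

```python
import numpy as np

c, s = np.cos(0.3), np.sin(0.3)
M = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])          # an orthogonal matrix; columns X, Y, Z
X, Y, Z = M[:, 0], M[:, 1], M[:, 2]
Q = np.outer(X, X) + np.outer(Y, Y)
print(np.allclose(Q, np.eye(3) - np.outer(Z, Z)))   # True
print(np.allclose(Q @ Z, np.zeros(3)))              # True: QZ = 0, not Z
```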
Let \( f : [0,3] \to \mathbb{R} \) be defined by

Now, define \( F : [0, 3] \to \mathbb{R} \) by \[ F(0) = 0 \quad and \quad F(x) = \int_0^x f(t) \, dt, for 0 < x \leq 3. \]
Then
View Solution
Step 1: Analyze the continuity of \( f \) at critical points.
The integrand \( f \) is bounded on \( [0, 3] \), so \( F \) is well defined and continuous on \( [0, 3] \). By the Fundamental Theorem of Calculus, \( F'(x) = f(x) \) at every point where \( f \) is continuous, so differentiability of \( F \) needs to be checked only at the points where the definition of \( f \) changes, namely \( x = 1 \) and \( x = 2 \).
Step 2: Check differentiability at \( x = 1 \).
At \( x = 1 \) the two pieces of \( f \) do not agree in the limit: the left-hand limit of \( f \) is \( 0 \), while the right-hand limit, coming from the second piece, is non-zero. Consequently \( F'(1-) \neq F'(1+) \), and \( F \) is not differentiable at \( x = 1 \).
Step 3: Conclusion.
The correct answer is (C) \( F \) is not differentiable at \( x = 1 \). Quick Tip: When working with piecewise functions, always check for differentiability at the boundaries where the function changes form.
If \( x, y, z \) are real numbers such that \[ 4x + 2y + z = 31 \quad and \quad 2x + 4y - z = 19, \]
then the value of \( 9x + 7y + z \) is
View Solution
Step 1: Solve the system of equations.
We are given the system of equations: \[ 4x + 2y + z = 31 \quad (1), \] \[ 2x + 4y - z = 19 \quad (2). \]
Step 2: Add equations (1) and (2) to eliminate \( z \).
Adding equations (1) and (2) gives: \[ (4x + 2y + z) + (2x + 4y - z) = 31 + 19, \] \[ 6x + 6y = 50 \quad \Rightarrow \quad x + y = \frac{50}{6} = \frac{25}{3}. \]
Step 3: Substitute \( y = \frac{25}{3} - x \) into equation (1).
Substitute \( y = \frac{25}{3} - x \) into equation (1): \[ 4x + 2\left( \frac{25}{3} - x \right) + z = 31, \] \[ 4x + \frac{50}{3} - 2x + z = 31, \] \[ 2x + z = 31 - \frac{50}{3} = \frac{93}{3} - \frac{50}{3} = \frac{43}{3}. \]
Thus, \( z = \frac{43}{3} - 2x \).
Step 4: Final computation.
Substitute \( z = \frac{43}{3} - 2x \) into \( 9x + 7y + z \): \[ 9x + 7y + z = 9x + 7\left( \frac{25}{3} - x \right) + \left( \frac{43}{3} - 2x \right) = (9 - 7 - 2)x + \frac{175 + 43}{3} = \frac{218}{3}. \]
Thus, the value of \( 9x + 7y + z \) is \( \frac{218}{3} \).
Step 5: Conclusion.
The correct answer is (D) equals \( \frac{218}{3} \). Quick Tip: When solving systems of linear equations, adding and subtracting equations is an effective way to eliminate variables.
Let

If

then
View Solution
Step 1: Find the null space of \( M \).
We are given the matrix \( M \). To find the null space, we solve the homogeneous system \( M\mathbf{x} = \mathbf{0} \).
Step 2: Solve the system.
Row-reducing \( M \) and solving \( M\mathbf{x} = \mathbf{0} \), we find that the solution set has two free variables, so the null space is a 2-dimensional subspace of \( \mathbb{R}^3 \).
Step 3: Conclusion.
The dimension of \( V \) is 2, so the correct answer is (A). Quick Tip: To find the dimension of the null space, solve the equation \( M \mathbf{x} = 0 \) and count the number of free variables in the solution.
Let \( M \) be a \( 3 \times 3 \) non-zero, skew-symmetric real matrix. If \( I \) is the \( 3 \times 3 \) identity matrix, then
View Solution
Step 1: Properties of skew-symmetric matrices.
For a skew-symmetric matrix \( M \), we have \( M^T = -M \), and every eigenvalue is purely imaginary or zero. For a \( 3 \times 3 \) non-zero skew-symmetric \( M \), the eigenvalues are \( 0, \, i\lambda, \, -i\lambda \) for some real \( \lambda \neq 0 \); in particular \( \det M = 0 \), so \( M \) itself is not invertible.
Step 2: Analyze the invertibility of \( \alpha I + M \).
The eigenvalues of \( \alpha I + M \) are \( \alpha, \, \alpha + i\lambda, \, \alpha - i\lambda \), so \[ \det(\alpha I + M) = \alpha\left(\alpha^2 + \lambda^2\right), \] which is non-zero for every non-zero real \( \alpha \).
Step 3: Conclusion.
Hence \( \alpha I + M \) is invertible for every non-zero real number \( \alpha \) (in particular \( I + M \) is invertible), while \( M \) itself is singular. Quick Tip: For skew-symmetric matrices, the eigenvalues are purely imaginary or zero. Use this property when analyzing their invertibility and behavior.
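A random example (numpy, with an arbitrary seed; the sampled values of \( \alpha \) are illustrative) shows \( \det M = 0 \) while \( \det(\alpha I + M) \neq 0 \) for several non-zero \( \alpha \):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
M = A - A.T                                          # non-zero 3x3 skew-symmetric matrix
print(np.isclose(np.linalg.det(M), 0.0))             # True: M itself is singular
for alpha in (-2.0, -0.5, 0.1, 1.0, 3.0):
    print(alpha, np.linalg.det(alpha * np.eye(3) + M))   # all non-zero
```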
Let \( X \) be a random variable with the moment generating function \[ M_X(t) = \frac{6}{\pi^2} \sum_{n \geq 1} \frac{e^{t^2/n}}{n^2}, \quad t \in \mathbb{R}. \]
Then \( P(X \in \mathbb{Q}) \), where \( \mathbb{Q} \) is the set of rational numbers, equals
View Solution
Step 1: Understand the moment generating function.
The given \( M_X(t) \) is a weighted sum of terms \( e^{t^2/n} \), each of which is the moment generating function of a \( N(0, \frac{2}{n}) \) random variable. Hence \( X \) is a mixture of normal distributions with mixing weights \( \frac{6}{\pi^2 n^2} \).
Step 2: Probability of rational values.
A mixture of normal distributions is absolutely continuous, so \( X \) has a probability density function and \( P(X = x) = 0 \) for every single point \( x \). Since \( \mathbb{Q} \) is countable, \( P(X \in \mathbb{Q}) = \sum_{q \in \mathbb{Q}} P(X = q) = 0 \).
Step 3: Conclusion.
Thus, \( P(X \in \mathbb{Q}) = 0 \), so the correct answer is (A) 0. Quick Tip: For continuous random variables, the probability that the variable takes any specific value is always zero.
Let \( X \) be a discrete random variable with the moment generating function \[ M_X(t) = \frac{(1 + 3e^t)^2 (3 + e^t)^3}{1024}, \quad t \in \mathbb{R}. \]
Then
View Solution
Step 1: Moment generating function.
Since \( M_X(0) = \frac{4^2 \cdot 4^3}{1024} = 1 \) and the MGF factorizes as \[ M_X(t) = \left(\frac{1 + 3e^t}{4}\right)^2 \left(\frac{3 + e^t}{4}\right)^3, \] \( X \) has the distribution of the sum of two independent Bernoulli\(\left(\frac{3}{4}\right)\) and three independent Bernoulli\(\left(\frac{1}{4}\right)\) random variables. (Equivalently, one can use \( E(X) = M_X'(0) \) and \( Var(X) = M_X''(0) - (M_X'(0))^2 \).)
Step 2: Compute the mean and variance.
\[ E(X) = 2 \cdot \frac{3}{4} + 3 \cdot \frac{1}{4} = \frac{9}{4}, \qquad Var(X) = 2 \cdot \frac{3}{4}\cdot\frac{1}{4} + 3 \cdot \frac{1}{4}\cdot\frac{3}{4} = \frac{6}{16} + \frac{9}{16} = \frac{15}{16}. \]
Step 3: Conclusion.
Thus \( E(X) = \frac{9}{4} \) and \( Var(X) = \frac{15}{16} \). Quick Tip: To find the mean and variance of a discrete random variable, either differentiate the moment generating function at \( t = 0 \) or, when the MGF factorizes into Bernoulli-type factors, read off the distribution directly.
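The moments can also be checked symbolically (a verification sketch, assuming sympy is available): differentiate the stated MGF at \( t = 0 \).

```python
import sympy as sp

t = sp.symbols('t')
M = (1 + 3 * sp.exp(t)) ** 2 * (3 + sp.exp(t)) ** 3 / 1024

m1 = sp.diff(M, t).subs(t, 0)                 # E(X)
m2 = sp.diff(M, t, 2).subs(t, 0)              # E(X^2)
print(sp.simplify(m1), sp.simplify(m2 - m1 ** 2))   # 9/4 and 15/16
```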
Let \( \{X_n\}_{n \geq 1} \) be a sequence of independent random variables with \( X_n \) having the probability density function as

Then \[ \lim_{n \to \infty} \left[ P(X_n > \frac{3n}{4}) + P(X_n > n + 2 \sqrt{2n}) \right] \]
equals
View Solution
Step 1: Analyze the limiting behavior.
The random variable \( X_n \) has mean \( n \) and variance \( 2n \), so by the central limit theorem \( \frac{X_n - n}{\sqrt{2n}} \) converges in distribution to a standard normal variable as \( n \to \infty \).
Step 2: Use the standard normal distribution.
For the first term, the threshold \( \frac{3n}{4} \) lies \( \frac{n/4}{\sqrt{2n}} \to \infty \) standard deviations below the mean, so \( P\!\left(X_n > \frac{3n}{4}\right) \to 1 \). For the second term, \[ P\left(X_n > n + 2\sqrt{2n}\right) = P\left(\frac{X_n - n}{\sqrt{2n}} > 2\right) \to 1 - \Phi(2). \]
Step 3: Conclusion.
Hence the limit equals \( 1 + \big(1 - \Phi(2)\big) = 2 - \Phi(2) \). Quick Tip: For large \( n \), standardize each probability using the mean and variance of \( X_n \) and apply the central limit theorem; watch for terms whose probability tends to 1.
Let \( X \) be a Poisson random variable with mean \( \frac{1}{2} \). Then \( E((X + 1)!) \) equals
View Solution
Step 1: Recall the Poisson distribution.
For a Poisson distribution with mean \( \lambda \), the probability mass function is given by: \[ P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k = 0, 1, 2, \dots \]
Here, \( \lambda = \frac{1}{2} \).
Step 2: Use the definition of expected value.
The expected value of \( (X + 1)! \) is: \[ E((X + 1)!) = \sum_{k=0}^{\infty} (k + 1)! \frac{\left(\frac{1}{2}\right)^k e^{-\frac{1}{2}}}{k!}. \]
Simplifying the terms: \[ E((X + 1)!) = e^{-\frac{1}{2}} \sum_{k=0}^{\infty} (k + 1) \left( \frac{1}{2} \right)^k. \]
Since \( \sum_{k=0}^{\infty} (k+1) x^k = \frac{1}{(1-x)^2} \) for \( |x| < 1 \), the sum equals \( \frac{1}{(1 - \frac{1}{2})^2} = 4 \), so \( E((X+1)!) = 4 e^{-\frac{1}{2}} \).
Step 3: Conclusion.
The correct answer is (B) \( 4e^{-\frac{1}{2}} \). Quick Tip: For Poisson random variables, the expected value of functions of \( X \) can often be computed by using standard results for sums involving \( e^{-\lambda} \).
Let \( X \) be a standard normal random variable. Then \( P(X^3 - 2X^2 - X + 2 > 0) \) equals
View Solution
Step 1: Rewrite the expression.
The cubic factors as \[ X^3 - 2X^2 - X + 2 = (X - 2)(X - 1)(X + 1), \] so the event \( \{X^3 - 2X^2 - X + 2 > 0\} \) is \( \{-1 < X < 1\} \cup \{X > 2\} \): the product of the three factors is positive exactly on these intervals.
Step 2: Express in terms of \( \Phi \).
Hence \[ P(X^3 - 2X^2 - X + 2 > 0) = \big(\Phi(1) - \Phi(-1)\big) + \big(1 - \Phi(2)\big) = 2\Phi(1) - \Phi(2). \]
Step 3: Conclusion.
The probability equals \( 2\Phi(1) - \Phi(2) \approx 0.705 \). Quick Tip: When solving inequalities involving standard normal variables, factor the polynomial, determine the solution intervals, and express their probabilities through the cumulative distribution function \( \Phi \).
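A Monte Carlo check (numpy, with an arbitrary seed and sample size) agrees with \( 2\Phi(1) - \Phi(2) \approx 0.7054 \) rather than \( \Phi(2) - \Phi(1) \approx 0.136 \):

```python
import math
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(2_000_000)
empirical = np.mean((x - 2) * (x - 1) * (x + 1) > 0)     # P((X-2)(X-1)(X+1) > 0)

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(empirical, 2 * Phi(1) - Phi(2))                    # both ≈ 0.705
```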
Let \( X \) and \( Y \) have the joint probability density function

Let \( a = E(Y|X = \frac{1}{2}) \) and \( b = Var(Y|X = \frac{1}{2}) \). Then \( (a, b) \) is
View Solution
Step 1: Conditional expectation.
The conditional expectation \( E(Y|X = x) \) is calculated using the formula: \[ E(Y|X = x) = \int_0^1 y f_{Y|X}(y|x) \, dy. \]
For \( X = \frac{1}{2} \), we compute the expected value \( a \).
Step 2: Conditional variance.
The conditional variance \( Var(Y|X = x) \) is given by: \[ Var(Y|X = x) = E(Y^2|X = x) - (E(Y|X = x))^2. \]
This can be computed similarly to the expectation, and the value of \( b \) is found.
Step 3: Conclusion.
The correct answer is (A) \( \left( \frac{3}{4}, \frac{7}{12} \right) \). Quick Tip: When computing conditional expectation and variance, use the definition and integrate over the conditional distribution of \( Y \) given \( X \).
Let \( X \) and \( Y \) have the joint probability mass function \[ P(X = m, Y = n) = \frac{m + n}{21}, \quad m = 1, 2, 3; \ n = 1, 2, \] and \( 0 \) otherwise.
Then \( P(X = 2 | Y = 2) \) equals
View Solution
Step 1: Find the conditional probability formula.
The conditional probability \( P(X = 2 | Y = 2) \) is given by: \[ P(X = 2 | Y = 2) = \frac{P(X = 2, Y = 2)}{P(Y = 2)}. \]
Step 2: Find \( P(X = 2, Y = 2) \).
From the given joint mass function: \[ P(X = 2, Y = 2) = \frac{2 + 2}{21} = \frac{4}{21}. \]
Step 3: Find \( P(Y = 2) \).
To find \( P(Y = 2) \), sum the joint probabilities for all \( X \) values when \( Y = 2 \): \[ P(Y = 2) = \frac{1 + 2}{21} + \frac{2 + 2}{21} + \frac{3 + 2}{21} = \frac{3}{21} + \frac{4}{21} + \frac{5}{21} = \frac{12}{21}. \]
Step 4: Calculate the conditional probability.
Now, calculate \( P(X = 2 | Y = 2) \): \[ P(X = 2 | Y = 2) = \frac{\frac{4}{21}}{\frac{12}{21}} = \frac{4}{12} = \frac{1}{3}. \]
Step 5: Conclusion.
The conditional probability equals \( \frac{1}{3} \). Quick Tip: When calculating conditional probabilities, use the formula \( P(X | Y) = \frac{P(X, Y)}{P(Y)} \).
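An exact enumeration of the six support points with Python fractions (a quick check) gives \( \frac{1}{3} \):

```python
from fractions import Fraction

joint = {(m, n): Fraction(m + n, 21) for m in (1, 2, 3) for n in (1, 2)}
p_y2 = sum(p for (m, n), p in joint.items() if n == 2)     # P(Y = 2) = 12/21
print(joint[(2, 2)] / p_y2)                                # 1/3
```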
Let \( X \) and \( Y \) be two independent standard normal random variables. Then the probability density function of \( Z = \frac{|X|}{|Y|} \) is
View Solution
Step 1: Recognize the form of the distribution.
The variable \( Z = \frac{|X|}{|Y|} = \left|\frac{X}{Y}\right| \) is the absolute value of the ratio of two independent standard normal random variables, and \( \frac{X}{Y} \) follows the standard Cauchy distribution.
Step 2: Fold the Cauchy density.
The standard Cauchy density is \( \frac{1}{\pi(1 + z^2)} \) for \( z \in \mathbb{R} \); taking the absolute value folds it onto the positive half-line and doubles the density there.
Step 3: Conclusion.
The correct answer is (D) \( f(z) = \frac{2}{\pi(1 + z^2)}, \quad z > 0, \quad 0, \quad otherwise \). Quick Tip: The ratio of two independent standard normal variables follows the Cauchy distribution.
Let \( X \) and \( Y \) have the joint probability density function

Then the correlation coefficient between \( X \) and \( Y \) equals
View Solution
Step 1: Compute the expected values.
The joint probability density function \( f(x, y) \) is given. To compute the correlation coefficient, we first need to compute the marginal distributions of \( X \) and \( Y \), and then their expected values \( E(X) \), \( E(Y) \), and \( E(XY) \).
Step 2: Calculate the variances and covariance.
Using the definitions of variance and covariance, compute the necessary moments from the joint distribution. Finally, the correlation coefficient is given by: \[ \rho(X, Y) = \frac{Cov(X, Y)}{\sqrt{Var(X) \cdot Var(Y)}}. \]
Step 3: Conclusion.
The correct answer is (B) \( \frac{1}{\sqrt{3}} \). Quick Tip: When computing the correlation coefficient, first calculate the marginal distributions, then compute the covariance and variances of the variables.
Let \( x_1 = -2, x_2 = 1 \) and \( x_3 = -1 \) be the observed values of a random sample of size three from a discrete distribution with the probability mass function

where \( \theta \in \{ 1, 2, \dots \} \) is the unknown parameter. Then the method of moment estimate of \( \theta \) is
View Solution
Step 1: Use the method of moments.
The method of moments involves equating the sample moments to the theoretical moments. The first moment (mean) of the distribution is: \[ E(X) = \frac{1}{2\theta + 1} \sum_{x=-\theta}^{\theta} x = 0, \]
which is true by symmetry.
Step 2: Use the second moment.
The second moment of the distribution is: \[ E(X^2) = \frac{1}{2\theta + 1} \sum_{x=-\theta}^{\theta} x^2 = \frac{\theta(\theta + 1)}{3}. \]
Equating this to the sample second moment \( \frac{1}{3}\left((-2)^2 + 1^2 + (-1)^2\right) = 2 \) gives \( \theta(\theta + 1) = 6 \), i.e. \( \theta^2 + \theta - 6 = 0 \), whose positive root is \( \theta = 2 \).
Step 3: Conclusion.
The method of moment estimate for \( \theta \) is (B) 2. Quick Tip: When using the method of moments, use the sample moments and equate them to the theoretical moments of the distribution to estimate the parameter.
Let \( X \) be a random sample from a discrete distribution with the probability mass function

where \( \theta \in \{20, 40\} \) is the unknown parameter. Consider testing \[ H_0: \theta = 40 \quad against \quad H_1: \theta = 20 \]
at a level of significance \( \alpha = 0.1 \). Then the uniformly most powerful test rejects \( H_0 \) if and only if
View Solution
Step 1: Define the likelihood ratio.
The uniformly most powerful (UMP) test is based on the likelihood ratio. The likelihood function for this discrete distribution is: \[ L(\theta) = \prod_{i=1}^n P(X_i = x_i). \]
For the given hypothesis test, the likelihood ratio test statistic can be derived based on the values of \( \theta = 40 \) and \( \theta = 20 \).
Step 2: Apply the test at \( \alpha = 0.1 \).
Using the likelihood ratio test, the uniformly most powerful test will reject \( H_0 \) if the observed test statistic exceeds a critical value. Based on the computations, the test will reject if \( X > 4 \).
Step 3: Conclusion.
The correct answer is (B) \( X > 4 \). Quick Tip: In hypothesis testing, the uniformly most powerful test is determined by the likelihood ratio and the level of significance.
Let \( X_1 \) and \( X_2 \) be a random sample of size 2 from a discrete distribution with the probability mass function

where \( \theta \in \{0.2, 0.4\} \) is the unknown parameter. For testing \[ H_0: \theta = 0.2 \quad against \quad H_1: \theta = 0.4, \]
consider a test with the critical region \[ C = \{(x_1, x_2) \in \{0, 1\}^2 : x_1 + x_2 < 2\}. \]
Let \( \alpha \) and \( \beta \) denote the probability of Type I error and power of the test, respectively. Then \( (\alpha, \beta) \) is
View Solution
Step 1: Determine the probability of Type I error (\( \alpha \)).
The test rejects \( H_0 \) when \( (x_1, x_2) \in C \), i.e. when \( x_1 + x_2 < 2 \). With the given probability mass function (under which \( P(X_i = 1) = 1 - \theta \)), under \( H_0 \), i.e. \( \theta = 0.2 \): \[ \alpha = P(X_1 + X_2 < 2 \mid \theta = 0.2) = 1 - P(X_1 = X_2 = 1) = 1 - (0.8)^2 = 0.36. \]
Step 2: Determine the power of the test (\( \beta \)).
Here \( \beta \) denotes the power, i.e. the probability of rejecting \( H_0 \) when \( H_1 \) is true. Under \( \theta = 0.4 \): \[ \beta = P(X_1 + X_2 < 2 \mid \theta = 0.4) = 1 - (0.6)^2 = 0.64. \]
Step 3: Conclusion.
Hence \( (\alpha, \beta) = (0.36, 0.64) \). Quick Tip: To find the probability of Type I error and the power, compute the probability of the critical region under \( H_0 \) and under \( H_1 \), respectively.
Let \( \{a_n\}_{n \geq 1} \) be a sequence of real numbers such that \[ a_n = \sum_{k=n+1}^{2n} \frac{1}{k}, \quad n \geq 1. \]
Then which of the following statement(s) is (are) true?
View Solution
Step 1: Understand the behavior of \( a_n \).
The sum \( a_n = \sum_{k=n+1}^{2n} \frac{1}{k} \) has \( n \) positive terms, each at most \( \frac{1}{n+1} \), so \( 0 < a_n < \frac{n}{n+1} < 1 \); the sequence is bounded.
Step 2: Check whether \( \{a_n\} \) is increasing.
Comparing consecutive terms, \[ a_{n+1} - a_n = \frac{1}{2n+1} + \frac{1}{2n+2} - \frac{1}{n+1} = \frac{1}{2n+1} - \frac{1}{2n+2} > 0, \] so the sequence is strictly increasing.
Step 3: Conclusion.
The sequence \( \{a_n\} \) is increasing and bounded above, hence convergent; its limit is the Riemann sum limit \( \int_0^1 \frac{dx}{1+x} = \ln 2 \). Quick Tip: For sums of fractions, compare consecutive partial sums directly; a monotone bounded sequence always converges.
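Computing the first few terms (plain Python) shows the sequence increasing towards \( \ln 2 \approx 0.693 \):

```python
import math

def a(n):
    # a_n = sum of 1/k for k = n+1, ..., 2n
    return sum(1 / k for k in range(n + 1, 2 * n + 1))

for n in (1, 2, 5, 10, 100, 1000):
    print(n, a(n))
print("ln 2 =", math.log(2))
```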
Let \( \sum_{n \geq 1} a_n \) be a convergent series of positive real numbers. Then which of the following statement(s) is (are) true?
View Solution
Step 1: Understanding the convergence of the series.
If \( \sum_{n \geq 1} a_n \) converges with \( a_n > 0 \), then \( a_n \to 0 \), so \( a_n < 1 \) for all large \( n \) and hence \( (a_n)^2 < a_n \) eventually; by comparison, \( \sum_{n \geq 1} (a_n)^2 \) converges.
Step 2: Analyze the remaining series.
- \( \sum_{n \geq 1} \sqrt{a_n} \) need not converge: for \( a_n = \frac{1}{n^2} \) it becomes the harmonic series.
- \( \sum_{n \geq 1} \frac{\sqrt{a_n}}{n} \) always converges, since by the AM-GM inequality \( \frac{\sqrt{a_n}}{n} \leq \frac{1}{2}\left(a_n + \frac{1}{n^2}\right) \), and both \( \sum a_n \) and \( \sum \frac{1}{n^2} \) converge.
- \( \sum_{n \geq 1} \frac{\sqrt{a_n}}{n^{1/4}} \) need not converge: for \( a_n = \frac{1}{n^{3/2}} \) it becomes \( \sum \frac{1}{n} \).
Step 3: Conclusion.
The series that are guaranteed to converge are \( \sum_{n \geq 1} (a_n)^2 \) and \( \sum_{n \geq 1} \frac{\sqrt{a_n}}{n} \). Quick Tip: For a convergent series of positive terms, comparison and the AM-GM (or Cauchy-Schwarz) inequality are the standard tools for testing derived series.
Let \( \{a_n\}_{n \geq 1} \) be a sequence of real numbers such that \( a_1 = 3 \) and, for \( n \geq 1 \), \[ a_{n+1} = \frac{a_n^2 - 2a_n + 4}{2}. \]
Then which of the following statement(s) is (are) true?
View Solution
Step 1: Analyze the recurrence relation.
A limit \( L \) would have to satisfy \( L = \frac{L^2 - 2L + 4}{2} \), i.e. \( (L - 2)^2 = 0 \), so the only fixed point is \( L = 2 \). However, \[ a_{n+1} - a_n = \frac{a_n^2 - 4a_n + 4}{2} = \frac{(a_n - 2)^2}{2} \geq 0, \] so the sequence is non-decreasing, and since \( a_1 = 3 > 2 \) it can never return to the fixed point.
Step 2: Check monotonicity and boundedness.
Computing a few terms, \( a_1 = 3, \ a_2 = 3.5, \ a_3 = 4.625, \ a_4 \approx 8.07, \dots \): the increments \( \frac{(a_n - 2)^2}{2} \) grow, so the sequence is strictly increasing and unbounded.
Step 3: Conclusion.
The sequence \( \{a_n\} \) is monotonically increasing and is not bounded above; it diverges to \( +\infty \). Quick Tip: For recurrence relations, find the fixed points first; if the sequence starts beyond the largest fixed point and is increasing, it cannot converge.
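Iterating the recurrence from \( a_1 = 3 \) (plain Python, a handful of iterations) makes the monotone growth obvious:

```python
# a_{n+1} = (a_n^2 - 2*a_n + 4)/2 with a_1 = 3: the terms increase without bound.
a = 3.0
for n in range(1, 8):
    print(f"a_{n} = {a}")
    a = (a * a - 2 * a + 4) / 2
```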
Let \( f : \mathbb{R} \to \mathbb{R} \) be defined by

Then which of the following statement(s) is (are) true?
View Solution
Step 1: Analyze the function for \( x \neq 0 \).
For \( x \neq 0 \), the function is given by \( f(x) = x^2\left(2 + \sin \frac{1}{x}\right) \), and since \( \sin \frac{1}{x} \in [-1, 1] \) we have \( f(x) \geq x^2(2 - 1) = x^2 > 0 \). So \( f \) oscillates near \( 0 \), but is always strictly positive for non-zero \( x \).
Step 2: Analyze the behavior at \( x = 0 \).
At \( x = 0 \), \( f(x) = 0 \). The function is continuous at \( x = 0 \) since the limit as \( x \to 0 \) from both sides equals 0.
Step 3: Check if the minimum occurs at 0.
Since \( f(x) \geq 0 \) for all \( x \neq 0 \) and \( f(0) = 0 \), the function attains its minimum value at \( x = 0 \).
Step 4: Conclusion.
The correct answer is (A) \( f \) attains its minimum at 0. Quick Tip: For functions that oscillate, check the value at \( x = 0 \) and compare it with the behavior of the function at nearby points to determine if it attains a minimum there.
Let \( P \) be a probability function that assigns the same weight to each of the points of the sample space \( \Omega = \{1, 2, 3, 4\} \). Consider the events \[ E = \{1, 2\}, \quad F = \{1, 3\}, \quad G = \{3, 4\}. \]
Then which of the following statement(s) is (are) true?
View Solution
Step 1: Recall the definition of independence.
Two events \( A \) and \( B \) are independent if: \[ P(A \cap B) = P(A)P(B). \]
For three events \( E \), \( F \), and \( G \) to be independent, they must satisfy: \[ P(E \cap F) = P(E)P(F), \quad P(E \cap G) = P(E)P(G), \quad P(F \cap G) = P(F)P(G), \quad and \quad P(E \cap F \cap G) = P(E)P(F)P(G). \]
Step 2: Compute the probabilities.
Since the probability function assigns the same weight to each point of \( \Omega \), \( P(E) = P(F) = P(G) = \frac{2}{4} = \frac{1}{2} \).
Checking the intersections: \( P(E \cap F) = P(\{1\}) = \frac{1}{4} = P(E)P(F) \) and \( P(F \cap G) = P(\{3\}) = \frac{1}{4} = P(F)P(G) \), so \( E, F \) are independent and \( F, G \) are independent. However, \( E \cap G = \emptyset \), so \( P(E \cap G) = 0 \neq \frac{1}{4} = P(E)P(G) \), and likewise \( P(E \cap F \cap G) = 0 \neq \frac{1}{8} \).
Step 3: Conclusion.
\( E \) and \( F \) are independent, and \( F \) and \( G \) are independent, but \( E \) and \( G \) are not; hence the three events are not (mutually) independent. Quick Tip: To check if events are independent, calculate the probability of their intersections and compare with the product of their individual probabilities.
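Enumerating the uniform sample space with Python fractions (a short check) shows exactly where independence fails:

```python
from fractions import Fraction

omega = {1, 2, 3, 4}

def P(A):
    # uniform probability: |A| / |omega|
    return Fraction(len(A), len(omega))

E, F, G = {1, 2}, {1, 3}, {3, 4}
print(P(E & F) == P(E) * P(F))              # True : E and F are independent
print(P(F & G) == P(F) * P(G))              # True : F and G are independent
print(P(E & G) == P(E) * P(G))              # False: E and G intersect in the empty set
print(P(E & F & G) == P(E) * P(F) * P(G))   # False: not mutually independent
```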
Let \( X_1, X_2, \dots, X_n \), where \( n \geq 5 \), be a random sample from a distribution with the probability density function

where \( \theta \in \mathbb{R} \) is the unknown parameter. Then which of the following statement(s) is (are) true?
View Solution
Step 1: Identify the distribution.
The given distribution is an exponential distribution shifted by \( \theta \). The sample minimum \( X_{(1)} = \min(X_1, X_2, \dots, X_n) \) is the maximum likelihood estimator of \( \theta \) and a sufficient statistic; it is not unbiased, since \( E(X_{(1)}) = \theta + \frac{1}{n} \).
Step 2: Understand confidence intervals.
For a 95% confidence interval, we use the critical value corresponding to the confidence level, and the interval depends on the sample statistic. Since \( \min(X_1, X_2, \dots, X_n) \) is a sufficient statistic for \( \theta \), the interval must be finite.
Step 3: Conclusion.
The correct answer is (A): a 95% confidence interval of \( \theta \) has to be of finite length. Quick Tip: For exponential distributions, the minimum of the sample is a good estimator for the parameter \( \theta \) and leads to a finite confidence interval.
Let \( X_1, X_2, \dots, X_n \) be a random sample from \( U(0, \theta) \), where \( \theta > 0 \) is the unknown parameter. Let \[ X_{(n)} = \max(X_1, X_2, \dots, X_n). \]
Then which of the following is (are) consistent estimator(s) of \( \theta^3 \)?
View Solution
Step 1: Identify the limiting behavior of the statistics.
For a sample from \( U(0, \theta) \), \( X_{(n)} \to \theta \) in probability (it is consistent for \( \theta \), though not unbiased, since \( E(X_{(n)}) = \frac{n}{n+1}\theta \)), and the sample mean \( \bar{X}_n \to \frac{\theta}{2} \) in probability.
Step 2: Apply the continuous mapping theorem.
Any continuous function of a consistent estimator of \( \theta \) is consistent for the corresponding function of \( \theta \). Hence \( \left(X_{(n)}\right)^3 \to \theta^3 \) and \( \left(2\bar{X}_n\right)^3 = 8\bar{X}_n^3 \to \theta^3 \) in probability, so both are consistent estimators of \( \theta^3 \). By contrast, a statistic such as \( 8\left(X_{(n)}\right)^3 \) converges in probability to \( 8\theta^3 \), and the plain average of cubes \( \frac{1}{n}\sum_{i=1}^n X_i^3 \) converges to \( E(X_1^3) = \frac{\theta^3}{4} \); neither is consistent for \( \theta^3 \) without rescaling.
Step 3: Conclusion.
The consistent estimators of \( \theta^3 \) among the listed statistics are exactly those that converge in probability to \( \theta^3 \), such as \( \left(X_{(n)}\right)^3 \) and \( 8\bar{X}_n^3 \). Quick Tip: Consistency is preserved under continuous transformations: if \( T_n \to \theta \) in probability and \( g \) is continuous, then \( g(T_n) \to g(\theta) \) in probability.
Let \( X_1, X_2, \dots, X_n \) be a random sample from a distribution with the probability density function

where \( \theta \in \mathbb{R} \) is the unknown parameter. Then which of the following statement(s) is (are) true?
View Solution
Step 1: Maximum Likelihood Estimation (MLE).
The likelihood function is given by: \[ L(\theta) = \prod_{i=1}^n f(X_i; \theta), \]
which, for this shifted density, is positive only when \( \theta \) does not exceed the smallest observation (every \( X_i \) must lie in the support) and, on that range, is an increasing function of \( \theta \); for \( \theta > \min(X_1, X_2, \dots, X_n) \) the likelihood is zero.
Step 2: Conclusion.
The maximum likelihood estimator of \( \theta \) is \( \min(X_1, X_2, \dots, X_n) \). Hence, the correct answer is (C). Quick Tip: For many distributions, the maximum likelihood estimator can be found by maximizing the likelihood function, often resulting in the minimum or maximum of the sample values.
Let \( X_1, X_2, \dots, X_n \) be a random sample from a distribution with the probability density function

where \( \theta > 0 \) is the unknown parameter. If \( Y = \sum_{i=1}^n X_i \), then which of the following statement(s) is (are) true?
View Solution
Step 1: Analyze the given distribution.
With the given density, each \( X_i \) has a Gamma distribution with shape parameter \( 2 \) and rate \( \theta \), so the sum \( Y = \sum_{i=1}^n X_i \) follows a Gamma distribution with shape \( 2n \) and rate \( \theta \); \( Y \) is a sufficient statistic for \( \theta \).
Step 2: Unbiased estimator.
For \( Y \sim \mathrm{Gamma}(2n, \theta) \), \[ E\!\left(\frac{1}{Y}\right) = \frac{\theta}{2n - 1}, \] so \( E\!\left(\frac{2n-1}{Y}\right) = \theta \). (The maximum likelihood estimator, by contrast, is \( \frac{2n}{Y} \), which is biased.)
Step 3: Conclusion.
Hence \( \frac{2n-1}{Y} \) is an unbiased estimator of \( \theta \), and the correct answer is (C). Quick Tip: For Gamma distributions, the sum of independent random variables with the same rate is again Gamma, and moments of \( \frac{1}{Y} \) follow directly from the Gamma integral.
Let \( X_1, X_2, \dots, X_n \) be a random sample from \( U(\theta, \theta + 1) \), where \( \theta \in \mathbb{R} \) is the unknown parameter. Let \[ U = \max(X_1, X_2, \dots, X_n) \quad and \quad V = \min(X_1, X_2, \dots, X_n). \]
Then which of the following statement(s) is (are) true?
View Solution
Step 1: Understand the sample statistics.
For a sample from \( U(\theta, \theta + 1) \), \( V = \min(X_1, \dots, X_n) \to \theta \) and \( U = \max(X_1, \dots, X_n) \to \theta + 1 \) in probability as \( n \to \infty \).
Step 2: Analyze the estimators.
Consequently \( V \) and \( U - 1 \) are consistent estimators of \( \theta \). By contrast, a combination such as \( 2U - V \) converges in probability to \( 2(\theta + 1) - \theta = \theta + 2 \) and is therefore not consistent for \( \theta \).
Step 3: Conclusion.
The consistent estimators of \( \theta \) are exactly those statistics built from \( U \) and \( V \) that converge in probability to \( \theta \), for example \( V \) and \( U - 1 \). Quick Tip: For uniform distributions, the extreme order statistics converge in probability to the endpoints of the support; check each proposed estimator by computing its probability limit.
Let \( \{a_n\}_{n \geq 1} \) be a sequence of real numbers such that \[ a_n = \frac{1 + 3 + 5 + \dots + (2n - 1)}{n!}, \quad n \geq 1. \]
Then \( \sum_{n \geq 1} a_n \) converges to ................
View Solution
Step 1: Simplify the expression for \( a_n \).
The sequence \( a_n = \frac{1 + 3 + 5 + \dots + (2n - 1)}{n!} \) represents the sum of the first \( n \) odd numbers divided by \( n! \).
It is known that the sum of the first \( n \) odd numbers is \( n^2 \), hence: \[ a_n = \frac{n^2}{n!}. \]
Step 2: Analyze the convergence of the series.
We now consider the series \( \sum_{n \geq 1} a_n = \sum_{n \geq 1} \frac{n^2}{n!} \).
To determine whether this series converges, we apply the ratio test. For large \( n \), the factorial term \( n! \) grows faster than \( n^2 \), causing the terms of the series to approach zero as \( n \) increases.
Step 3: Apply the ratio test.
We apply the ratio test by evaluating: \[ \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \lim_{n \to \infty} \left| \frac{\frac{(n+1)^2}{(n+1)!}}{\frac{n^2}{n!}} \right| = \lim_{n \to \infty} \frac{(n+1)^2}{n^2 (n+1)} = 0. \]
Since the ratio is less than 1, the series converges.
Step 4: Determine the sum of the series.
Writing \( \frac{n^2}{n!} = \frac{n}{(n-1)!} \) and shifting the index (\( m = n - 1 \)): \[ \sum_{n \geq 1} \frac{n^2}{n!} = \sum_{m \geq 0} \frac{m + 1}{m!} = \sum_{m \geq 1} \frac{1}{(m-1)!} + \sum_{m \geq 0} \frac{1}{m!} = e + e = 2e. \] Thus, the series \( \sum_{n \geq 1} a_n \) converges to \( 2e \approx 5.44 \). Quick Tip: For series with factorials in the denominator, use the ratio test to determine convergence and shift the summation index to reduce the sum to the exponential series \( \sum \frac{1}{m!} = e \).
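Partial sums (plain Python, truncated at an arbitrary number of terms) converge quickly to \( 2e \approx 5.4366 \):

```python
import math

total = 0.0
for n in range(1, 21):
    total += n * n / math.factorial(n)
print(total, 2 * math.e)      # both ≈ 5.43656
```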
Let \[ S = \{(x, y) \in \mathbb{R}^2 : x \geq 0, \sqrt{4 - (x - 2)^2} \leq y \leq \sqrt{9 - (x - 3)^2} \}. \]
Then the area of \( S \) equals .............
View Solution
Step 1: Identify the curves.
The region \( S \) is bounded between two curves: \[ y = \sqrt{4 - (x - 2)^2} \quad and \quad y = \sqrt{9 - (x - 3)^2}, \]
which are portions of circles. The first curve is a semicircle with radius 2, centered at \( (2, 0) \), and the second curve is a semicircle with radius 3, centered at \( (3, 0) \).
Step 2: Sketch and understand the region.
The region \( S \) represents the area between these two semicircles. The area of \( S \) can be computed by finding the area of the larger semicircle and subtracting the area of the smaller semicircle.
Step 3: Calculate the area.
The area of a full circle is \( \pi r^2 \), so the area of a semicircle is half that:
- The area of the larger semicircle is \( \frac{1}{2} \pi (3)^2 = \frac{9\pi}{2} \).
- The area of the smaller semicircle is \( \frac{1}{2} \pi (2)^2 = 2\pi \).
Thus, the area of \( S \) is: \[ Area of S = \frac{9\pi}{2} - 2\pi = \frac{5\pi}{2}. \]
Step 4: Conclusion.
The area of \( S \) equals \( \frac{5\pi}{2} \approx 7.85 \). Quick Tip: For regions between curves, calculate the area of each region and subtract to find the total area.
Let \[ S = \{(x, y) \in \mathbb{R}^2 : |x| + |y| \leq 1\}. \]
Then the area of \( S \) equals ...............
View Solution
Step 1: Recognize the shape of \( S \).
The inequality \( |x| + |y| \leq 1 \) describes a square rotated by \( 45^\circ \) (a 'diamond') with vertices at \( (1, 0) \), \( (0, 1) \), \( (-1, 0) \) and \( (0, -1) \).
Step 2: Calculate the area of the square.
Each side joins adjacent vertices such as \( (1, 0) \) and \( (0, 1) \), so the side length is \( \sqrt{2} \) and the area is \( (\sqrt{2})^2 = 2 \); equivalently, both diagonals have length \( 2 \), giving area \( \frac{1}{2} \cdot 2 \cdot 2 = 2 \).
Step 3: Conclusion.
The area of the region \( S \) is \( 2 \). Quick Tip: For geometric shapes like squares or diamonds, the area can be easily calculated by determining the side lengths (or the diagonals) and using basic area formulas.
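A Monte Carlo estimate over the bounding square \( [-1,1]^2 \) (numpy, with an arbitrary seed and sample size) lands near 2, matching the exact value:

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(1_000_000, 2))
inside = np.abs(pts).sum(axis=1) <= 1        # |x| + |y| <= 1
print(4 * inside.mean())                     # ≈ 2 (the bounding square has area 4)
```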
Let \[ J = \frac{1}{\pi} \int_0^1 t^{-\frac{1}{2}} (1 - t)^{\frac{3}{2}} \, dt. \]
Then the value of \( J \) equals ..............
View Solution
Step 1: Recognize the form of the integral.
The integral is in a form that resembles a Beta function. The Beta function is defined as: \[ B(x, y) = \int_0^1 t^{x-1} (1 - t)^{y-1} dt. \]
By comparing the powers of \( t \) and \( 1 - t \), we see that this integral is a Beta function \( B\left( \frac{1}{2}, \frac{5}{2} \right) \).
Step 2: Apply the relationship between Beta and Gamma functions.
Using the relation between Beta and Gamma functions: \[ B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)}, \]
we can calculate the Beta function as: \[ B\left( \frac{1}{2}, \frac{5}{2} \right) = \frac{\Gamma\left( \frac{1}{2} \right) \Gamma\left( \frac{5}{2} \right)}{\Gamma\left( 3 \right)}. \]
Using known values for the Gamma function \( \Gamma\left( \frac{1}{2} \right) = \sqrt{\pi} \) and \( \Gamma\left( \frac{5}{2} \right) = \frac{3}{4} \sqrt{\pi} \), we find: \[ B\left( \frac{1}{2}, \frac{5}{2} \right) = \frac{\sqrt{\pi} \cdot \frac{3}{4} \sqrt{\pi}}{2} = \frac{3\pi}{8}. \]
Step 3: Conclusion.
Since the integral involves a factor of \( \frac{1}{\pi} \), we have: \[ J = \frac{1}{\pi} \cdot \frac{3\pi}{8} = \frac{3}{8} = 0.375. \] Quick Tip: Recognize integrals involving powers of \( t \) and \( 1 - t \) as Beta functions, which can be related to Gamma functions for easier evaluation.
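The Beta/Gamma identity can be checked numerically with the standard library's math.gamma (a small sketch):

```python
import math

B = math.gamma(0.5) * math.gamma(2.5) / math.gamma(3.0)   # B(1/2, 5/2) = 3*pi/8
J = B / math.pi
print(B, 3 * math.pi / 8)     # both ≈ 1.1781
print(J)                      # ≈ 0.375 = 3/8
```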
A fair die is rolled three times independently. Given that 6 appeared at least once, the conditional probability that 6 appeared exactly twice equals ..............
View Solution
Step 1: Define the total number of possible outcomes.
When a fair die is rolled three times, the total number of outcomes is \( 6^3 = 216 \).
Step 2: Calculate the conditional probability.
We are given that at least one 6 appeared. The total number of outcomes where at least one 6 appears can be calculated by subtracting the outcomes where no 6 appears from the total outcomes: \[ Outcomes with at least one 6 = 6^3 - 5^3 = 216 - 125 = 91. \]
Step 3: Calculate the number of outcomes with exactly two 6s.
To have exactly two 6s, we must choose 2 positions out of 3 for the 6s, and the remaining position must be any of the 5 other numbers: \[ Outcomes with exactly two 6s = \binom{3}{2} \times 5 = 3 \times 5 = 15. \]
Step 4: Calculate the conditional probability.
The conditional probability is the ratio of outcomes with exactly two 6s to the total outcomes with at least one 6: \[ P(exactly two 6s \mid at least one 6) = \frac{15}{91} \approx 0.165. \] Quick Tip: When calculating conditional probabilities, focus on the favorable outcomes and the total number of possible outcomes given the condition.
Let \( X \) and \( Y \) be two positive integer-valued random variables with the joint probability mass function \[ P(X = m, Y = n) = g(m) h(n), \quad m, n \geq 1, \]
where \( g(m) = \left( \frac{1}{2} \right)^{m-1} \), \( m \geq 1 \), and \( h(n) = \left( \frac{1}{3} \right)^n \), \( n \geq 1 \). Then \( E(XY) \) equals ..........
View Solution
Step 1: Write the expression for \( E(XY) \).
The expected value \( E(XY) \) is given by: \[ E(XY) = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} m n P(X = m, Y = n). \]
Substitute the joint probability mass function: \[ E(XY) = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} m n g(m) h(n) = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} m n \left( \frac{1}{2} \right)^{m-1} \left( \frac{1}{3} \right)^n. \]
Step 2: Separate the sums.
We can separate the sums because the summation over \( m \) and \( n \) are independent: \[ E(XY) = \left( \sum_{m=1}^{\infty} m \left( \frac{1}{2} \right)^{m-1} \right) \left( \sum_{n=1}^{\infty} n \left( \frac{1}{3} \right)^n \right). \]
Step 3: Evaluate the sums.
The first sum is: \[ \sum_{m=1}^{\infty} m \left( \frac{1}{2} \right)^{m-1} = \frac{1}{\left( 1 - \frac{1}{2} \right)^2} = 4. \]
The second sum is: \[ \sum_{n=1}^{\infty} n \left( \frac{1}{3} \right)^n = \frac{\frac{1}{3}}{\left( 1 - \frac{1}{3} \right)^2} = \frac{3}{4}. \]
Step 4: Conclusion.
Thus, the expected value is: \[ E(XY) = 4 \times \frac{3}{4} = 3. \] Quick Tip: For expected values of products of independent random variables, separate the sums and use known results for geometric series.
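Truncating the double series numerically (plain Python; the cutoff of 60 terms is arbitrary) reproduces the value 3:

```python
# E(XY) = sum over m, n of m*n*(1/2)^(m-1)*(1/3)^n; the tail beyond 60 terms is negligible.
total = 0.0
for m in range(1, 60):
    for n in range(1, 60):
        total += m * n * (0.5 ** (m - 1)) * ((1 / 3) ** n)
print(total)    # ≈ 3.0
```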
Let \( E, F \) and \( G \) be three events such that \[ P(E \cap F \cap G) = 0.1, \quad P(G \mid F) = 0.3 \quad and \quad P(E \mid F \cap G) = P(E \mid F). \]
Then \( P(G \mid E \cap F) \) equals ................
View Solution
Step 1: Understand the given probabilities.
We are given the following information: \[ P(E \cap F \cap G) = 0.1, \quad P(G \mid F) = 0.3, \quad P(E \mid F \cap G) = P(E \mid F). \]
We need to find \( P(G \mid E \cap F) \).
Step 2: Apply the conditional probability formula.
The conditional probability \( P(G \mid E \cap F) \) is given by the formula: \[ P(G \mid E \cap F) = \frac{P(E \cap F \cap G)}{P(E \cap F)}. \]
The condition \( P(E \mid F \cap G) = P(E \mid F) \) means \( \frac{P(E \cap F \cap G)}{P(F \cap G)} = \frac{P(E \cap F)}{P(F)} \), so \[ P(E \cap F) = \frac{P(E \cap F \cap G)\, P(F)}{P(F \cap G)} = \frac{P(E \cap F \cap G)}{P(G \mid F)} = \frac{0.1}{0.3} = \frac{1}{3}. \]
Step 3: Substitute values.
Now, substituting into the conditional probability formula: \[ P(G \mid E \cap F) = \frac{0.1}{1/3} = 0.3. \]
Step 4: Conclusion.
The correct answer is 0.3. Quick Tip: To calculate conditional probabilities, use the formula \( P(A \mid B) = \frac{P(A \cap B)}{P(B)} \).
Let \( A_1, A_2 \) and \( A_3 \) be three events such that \[ P(A_i) = \frac{1}{3}, \quad i = 1, 2, 3; \quad P(A_i \cap A_j) = \frac{1}{6}, \quad 1 \leq i \neq j \leq 3 \quad and \quad P(A_1 \cap A_2 \cap A_3) = \frac{1}{6}. \]
Then the probability that none of the events \( A_1, A_2, A_3 \) occur equals ...........
View Solution
Step 1: Use the principle of inclusion-exclusion.
To calculate the probability that none of the events \( A_1, A_2, A_3 \) occur, we first calculate the probability that at least one of them occurs. Using the inclusion-exclusion principle: \[ P(A_1 \cup A_2 \cup A_3) = P(A_1) + P(A_2) + P(A_3) - P(A_1 \cap A_2) - P(A_1 \cap A_3) - P(A_2 \cap A_3) + P(A_1 \cap A_2 \cap A_3). \]
Substitute the given values: \[ P(A_1 \cup A_2 \cup A_3) = 3 \times \frac{1}{3} - 3 \times \frac{1}{6} + \frac{1}{6} = 1 - \frac{1}{2} + \frac{1}{6} = \frac{2}{3}. \]
Step 2: Calculate the probability that none of the events occur.
The probability that none of the events occur is the complement of the probability that at least one occurs: \[ P(none of A_1, A_2, A_3 occur) = 1 - P(A_1 \cup A_2 \cup A_3) = 1 - \frac{2}{3} = \frac{1}{3}. \]
Step 3: Conclusion.
The probability that none of the events occur is \( \frac{1}{3} \). Quick Tip: For problems involving multiple events, use the principle of inclusion-exclusion to find the probability of their union.
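The inclusion-exclusion arithmetic with the stated values, done with Python fractions (a quick check of the computation above):

```python
from fractions import Fraction

p_single = 3 * Fraction(1, 3)      # P(A1) + P(A2) + P(A3)
p_pairs = 3 * Fraction(1, 6)       # sum of the three pairwise intersection probabilities
p_triple = Fraction(1, 6)

p_union = p_single - p_pairs + p_triple   # 2/3
print(1 - p_union)                        # 1/3
```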
Let \( X_1, X_2, \dots, X_n \) be a random sample from the distribution with the probability density function \[ f(x) = \frac{1}{4} e^{-|x-4|} + \frac{1}{4} e^{-|x-6|}, \quad x \in \mathbb{R}. \]
Then \( \frac{1}{n} \sum_{i=1}^{n} X_i \) converges in probability to ................
View Solution
Step 1: Understand the distribution.
The given probability density function \( f(x) \) is an equal-weight mixture of two Laplace (double-exponential) distributions, one centred at \( 4 \) and the other at \( 6 \): each component \( \frac{1}{2} e^{-|x - c|} \) enters with mixing weight \( \frac{1}{2} \).
Step 2: Identify the expected value of the distribution.
Each Laplace component has mean equal to its centre, so \[ E(X) = \frac{1}{2}(4) + \frac{1}{2}(6) = 5. \]
Step 3: Apply the Law of Large Numbers.
By the Weak Law of Large Numbers, as \( n \to \infty \), the sample mean \( \frac{1}{n} \sum_{i=1}^{n} X_i \) will converge in probability to the expected value of the distribution, which is 5.
Step 4: Conclusion.
Thus, \( \frac{1}{n} \sum_{i=1}^{n} X_i \) converges in probability to \( 5 \). Quick Tip: For mixtures of distributions, the expected value is the weighted average of the means of the components.
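A Monte Carlo check of the limiting value, treating the density as an equal-weight mixture of Laplace(4, 1) and Laplace(6, 1) components (a quick verification sketch):

```python
import numpy as np

# Sample from the mixture (1/2) Laplace(4, 1) + (1/2) Laplace(6, 1)
# and confirm that the sample mean settles near 5.
rng = np.random.default_rng(0)
n = 200_000
centres = rng.choice([4.0, 6.0], size=n)        # pick a component with prob 1/2 each
samples = rng.laplace(loc=centres, scale=1.0)   # draw from the chosen component
print(samples.mean())                           # ≈ 5.0
```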
Let \( x_1 = 1.1, x_2 = 2.2, x_3 = 3.3 \) be the observed values of a random sample of size three from a distribution with the probability density function

where \( \theta \in \{ 1, 2, \dots \} \) is the unknown parameter. Then the maximum likelihood estimate of \( \theta \) equals .........
View Solution
Step 1: Write the likelihood function.
The likelihood function for the sample \( x_1, x_2, x_3 \) is given by: \[ L(\theta) = \prod_{i=1}^{3} f(x_i; \theta) = \prod_{i=1}^{3} \frac{1}{\theta} e^{-x_i/\theta}. \]
Since \( f(x_i; \theta) = \frac{1}{\theta} e^{-x_i/\theta} \) for \( x_i > 0 \), we get: \[ L(\theta) = \frac{1}{\theta^3} e^{-(x_1 + x_2 + x_3)/\theta}. \]
Step 2: Maximize the likelihood function.
To find the maximum likelihood estimate, we take the log of the likelihood function: \[ \log L(\theta) = -3 \log \theta - \frac{x_1 + x_2 + x_3}{\theta}. \]
We differentiate this with respect to \( \theta \) and set the derivative equal to zero: \[ \frac{d}{d\theta} \log L(\theta) = -\frac{3}{\theta} + \frac{x_1 + x_2 + x_3}{\theta^2} = 0. \]
Simplifying, we get: \[ 3 \theta = x_1 + x_2 + x_3. \]
Step 3: Solve for \( \theta \).
Substitute the observed values \( x_1 = 1.1, x_2 = 2.2, x_3 = 3.3 \): \[ \theta = \frac{1.1 + 2.2 + 3.3}{3} = \frac{6.6}{3} = 2.2. \]
Step 4: Conclusion.
Thus, the maximum likelihood estimate of \( \theta \) is \( \boxed{2.2} \). Quick Tip: The maximum likelihood estimate for exponential distributions is the sample mean.
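The original density is not reproduced above; assuming the exponential form \( f(x;\theta) = \frac{1}{\theta} e^{-x/\theta} \) that the solution works with, the following sketch maximises the log-likelihood numerically and recovers the sample mean:

```python
import numpy as np

# Assumes f(x; θ) = (1/θ) exp(-x/θ), the form used in Step 1 of the solution.
x = np.array([1.1, 2.2, 3.3])
thetas = np.linspace(0.5, 6.0, 1101)                    # grid of candidate θ values
loglik = -len(x) * np.log(thetas) - x.sum() / thetas    # log-likelihood on the grid
print(thetas[np.argmax(loglik)])                        # ≈ 2.2, the sample mean
```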
Let \( f : \mathbb{R} \to \mathbb{R} \) be a differentiable function such that \( f' \) is continuous on \( \mathbb{R} \) with \( f'(3) = 18 \). Define \[ g_n(x) = n \left( f \left( x + \frac{5}{n} \right) - f \left( x - \frac{2}{n} \right) \right). \]
Then \( \lim_{n \to \infty} g_n(3) \) equals ...............
View Solution
Step 1: Understand the expression for \( g_n(x) \).
We are given the function \( g_n(x) = n \left( f \left( x + \frac{5}{n} \right) - f \left( x - \frac{2}{n} \right) \right) \).
Step 2: Use the definition of the derivative.
The expression for \( g_n(x) \) is a finite-difference approximation to the derivative of \( f \). Since \( f' \) is continuous, a first-order Taylor expansion about \( x \) gives \[ f \left( x + \frac{5}{n} \right) - f \left( x - \frac{2}{n} \right) \approx f'(x) \left( \frac{5}{n} + \frac{2}{n} \right) = f'(x) \cdot \frac{7}{n}. \]
Step 3: Simplify the expression for \( g_n(x) \).
Thus, for large \( n \): \[ g_n(x) \approx n \cdot f'(x) \cdot \frac{7}{n} = 7 f'(x), \] with an error that vanishes as \( n \to \infty \).
Step 4: Take the limit as \( n \to \infty \).
Since \( g_n(x) = 7 f'(x) \), and \( f'(3) = 18 \), we find: \[ \lim_{n \to \infty} g_n(3) = 7 \times 18 = 126. \] Quick Tip: When dealing with expressions involving finite differences, recognize that they often approximate derivatives. Use the derivative's definition to simplify the expression.
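A numerical illustration of the finite-difference argument; \( f(x) = 3x^2 \) is only one convenient choice with \( f'(3) = 18 \), since the problem does not specify \( f \):

```python
# Illustrative example: f(x) = 3x^2 satisfies f'(3) = 18.
f = lambda x: 3 * x**2

for n in (10, 100, 1000, 10_000):
    g_n = n * (f(3 + 5 / n) - f(3 - 2 / n))
    print(n, g_n)          # tends to 7 * f'(3) = 126 as n grows
```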
Let \( M = \sum_{i=1}^{4} X_i X_i^T \), where \[ X_1^T = [1 \ -1 \ 1 \ 0], \quad X_2^T = [1 \ 1 \ 0 \ 1], \quad X_3^T = [1 \ 3 \ 1 \ 0] \quad and \quad X_4^T = [1 \ 1 \ 1 \ 0]. \]
Then the rank of \( M \) equals ...............
View Solution
Step 1: Understand the matrix \( M \).
The matrix \( M \) is the sum of the outer products \( X_i X_i^T \), each of which has rank 1. Equivalently, \( M = A^T A \), where \( A \) is the \( 4 \times 4 \) matrix whose rows are \( X_1^T, X_2^T, X_3^T, X_4^T \); therefore \( \operatorname{rank}(M) = \operatorname{rank}(A) \), the number of linearly independent vectors among \( X_1, X_2, X_3, X_4 \).
Step 2: Check the linear independence of the vectors.
Note that \( X_3 = 2X_4 - X_1 \), since \( 2[1 \ 1 \ 1 \ 0] - [1 \ -1 \ 1 \ 0] = [1 \ 3 \ 1 \ 0] \). On the other hand, \( X_1, X_2, X_4 \) are linearly independent: row reducing \[ \begin{bmatrix} 1 & -1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix} \] leaves three nonzero rows.
Step 3: Determine the rank of \( M \).
Hence \( \operatorname{span}\{X_1, X_2, X_3, X_4\} \) has dimension 3, so the rank of \( M \) is 3.
Step 4: Conclusion.
Thus, the rank of \( M \) is \( \boxed{3} \). Quick Tip: The rank of a matrix formed by summing outer products of vectors is the number of linearly independent vectors in the set.
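A direct rank computation confirming the answer (a verification sketch using NumPy):

```python
import numpy as np

# M = Σ X_i X_iᵀ has the same rank as the matrix A whose rows are the X_iᵀ.
A = np.array([[1, -1, 1, 0],
              [1,  1, 0, 1],
              [1,  3, 1, 0],
              [1,  1, 1, 0]], dtype=float)

M = sum(np.outer(x, x) for x in A)
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(A))   # -> 3 3
```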
Let \( f : \mathbb{R} \to \mathbb{R} \) be a differentiable function with \[ \lim_{x \to \infty} f(x) = \infty \quad and \quad \lim_{x \to \infty} f'(x) = 2. \]
Then \[ \lim_{x \to \infty} \left( 1 + \frac{f(x)}{x^2} \right) \] equals ..............
View Solution
Step 1: Consider the limit expression.
We are given that \( \lim_{x \to \infty} f(x) = \infty \) and \( \lim_{x \to \infty} f'(x) = 2 \). We need to find the limit of the expression \( \left( 1 + \frac{f(x)}{x^2} \right) \) as \( x \to \infty \).
Step 2: Approximate \( f(x) \) for large \( x \).
Since \( f(x) \to \infty \) and \( \lim_{x \to \infty} f'(x) = 2 \), L'Hôpital's rule gives \( \lim_{x \to \infty} \frac{f(x)}{x} = \lim_{x \to \infty} f'(x) = 2 \).
Thus, for large \( x \), \( \frac{f(x)}{x^2} = \frac{f(x)}{x} \cdot \frac{1}{x} \approx \frac{2}{x} \).
Step 3: Evaluate the limit.
As \( x \to \infty \), \( \frac{2}{x} \to 0 \). Therefore, the limit is: \[ \lim_{x \to \infty} \left( 1 + \frac{f(x)}{x^2} \right) = 1 + 0 = 1. \]
Step 4: Conclusion.
Thus, the correct answer is \( \boxed{1} \). Quick Tip: When dealing with limits involving functions whose derivatives approach a constant, approximate the function and analyze the behavior of the resulting expression.
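A quick numerical illustration; \( f(x) = 2x + 7 \) is only one admissible example of a function with \( f(x) \to \infty \) and \( f'(x) \to 2 \):

```python
# Illustrative example with f(x) -> ∞ and f'(x) -> 2.
f = lambda x: 2 * x + 7

for x in (1e2, 1e4, 1e6):
    print(x, 1 + f(x) / x**2)    # approaches 1
```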
The value of \[ \int_0^{\pi} \left( \int_0^x e^{\sin y} \sin x \, dy \right) dx \]
equals .............
View Solution
Step 1: Understand the structure of the double integral.
We are asked to evaluate a double integral: \[ I = \int_0^{\pi} \left( \int_0^x e^{\sin y} \sin x \, dy \right) dx. \]
The inner integral involves integrating with respect to \( y \), and the outer integral is with respect to \( x \).
Step 2: Simplify the inner integral.
The factor \( \sin x \) does not depend on \( y \), so it can be taken outside the inner integral, which becomes: \[ \sin x \int_0^x e^{\sin y} \, dy. \]
Step 3: Focus on the outer integral.
For the outer integral, we integrate \( \sin x \int_0^x e^{\sin y} \, dy \) with respect to \( x \) over \( [0, \pi] \).
Step 4: Conclusion.
By performing the integration and evaluating the limits, we find that the value of the integral is \( \pi \). Quick Tip: For double integrals, simplify the inner integral first, and then perform the outer integration step by step.
Let \( X \) be a random variable with the probability density function

where \( k \) is a positive integer. Then \( P\left( \frac{1}{2} \leq X \leq \frac{3}{2} \right) \) equals ............
View Solution
Step 1: Set up the probability expression.
We are asked to find \( P\left( \frac{1}{2} \leq X \leq \frac{3}{2} \right) \), which is the integral of the probability density function over the interval \( \left[ \frac{1}{2}, \frac{3}{2} \right] \).
Step 2: Split the integral.
We divide the integral into two parts because the probability density function \( f(x) \) has different forms in the intervals \( 0 < x < 1 \) and \( 1 \leq x < 2 \). Therefore, we compute: \[ P\left( \frac{1}{2} \leq X \leq \frac{3}{2} \right) = \int_{1/2}^1 4x^k dx + \int_1^{3/2} \left( x - \frac{x^2}{2} \right) dx. \]
Step 3: Evaluate the integrals.
For the first integral, we have: \[ \int_{1/2}^1 4x^k dx = \frac{4x^{k+1}}{k+1} \bigg|_{1/2}^1 = \frac{4}{k+1} \left( 1^{k+1} - \left(\frac{1}{2}\right)^{k+1} \right). \]
For the second integral, we compute: \[ \int_1^{3/2} \left( x - \frac{x^2}{2} \right) dx = \left( \frac{x^2}{2} - \frac{x^3}{6} \right) \bigg|_1^{3/2}. \]
Step 4: Combine the results.
After performing the integrations and simplifying, we find that: \[ P\left( \frac{1}{2} \leq X \leq \frac{3}{2} \right) = \frac{5}{8}. \] Quick Tip: When dealing with piecewise probability density functions, split the integral according to the intervals where the function changes.
Let \( X \) and \( Y \) be two discrete random variables with the joint moment generating function \[ M_{X,Y}(t_1, t_2) = \left( \frac{1}{3} e^{t_1} + \frac{2}{3} \right)^2 \left( \frac{2}{3} e^{t_2} + \frac{1}{3} \right)^3, \quad t_1, t_2 \in \mathbb{R}. \]
Then \( P(2X + 3Y > 1) \) equals .........
View Solution
Step 1: Understand the joint moment generating function.
We are given the joint moment generating function: \[ M_{X,Y}(t_1, t_2) = \left( \frac{1}{3} e^{t_1} + \frac{2}{3} \right)^2 \left( \frac{2}{3} e^{t_2} + \frac{1}{3} \right)^3. \]
Because the joint MGF factors into a function of \( t_1 \) times a function of \( t_2 \), the random variables \( X \) and \( Y \) are independent. Moreover, \( \left( \frac{1}{3} e^{t_1} + \frac{2}{3} \right)^2 \) is the MGF of a Binomial\(\left(2, \frac{1}{3}\right)\) random variable and \( \left( \frac{2}{3} e^{t_2} + \frac{1}{3} \right)^3 \) is the MGF of a Binomial\(\left(3, \frac{2}{3}\right)\) random variable.
Step 2: Find the probability expression.
Since \( X \) and \( Y \) are non-negative integers, the event \( \{2X + 3Y > 1\} \) fails only when \( X = 0 \) and \( Y = 0 \). By independence, \[ P(2X + 3Y > 1) = 1 - P(X = 0)\,P(Y = 0) = 1 - \left( \frac{2}{3} \right)^2 \left( \frac{1}{3} \right)^3 = 1 - \frac{4}{243} = \frac{239}{243} \approx 0.9835. \]
Step 3: Conclusion.
Thus, \( P(2X + 3Y > 1) = \frac{239}{243} \approx 0.9835 \). Quick Tip: The moment generating function identifies the distributions of \( X \) and \( Y \) and can be used to derive probabilities for sums and transformations of random variables.
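Under this binomial reading of the moment generating function, the probability can be checked by enumerating the joint probability mass function (a verification sketch):

```python
from fractions import Fraction as F
from math import comb

# X ~ Binomial(2, 1/3) independent of Y ~ Binomial(3, 2/3).
pX = {x: comb(2, x) * F(1, 3)**x * F(2, 3)**(2 - x) for x in range(3)}
pY = {y: comb(3, y) * F(2, 3)**y * F(1, 3)**(3 - y) for y in range(4)}

prob = sum(pX[x] * pY[y] for x in pX for y in pY if 2 * x + 3 * y > 1)
print(prob, float(prob))    # -> 239/243 ≈ 0.9835
```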
Let \( X_1, X_2, X_3 \) and \( X_4 \) be i.i.d. discrete random variables with the probability mass function \[ P(X_1 = n) = \frac{3^{n-1}}{4^n}, \quad n = 1, 2, \dots. \]
Then \( P(X_1 + X_2 + X_3 + X_4 = 6) \) equals ...............
View Solution
Step 1: Set up the sum of the variables.
We are asked to find \( P(X_1 + X_2 + X_3 + X_4 = 6) \). Since the random variables are independent and identically distributed (i.i.d.), we can use the probability mass function for each \( X_i \) and sum their contributions.
Step 2: Convolution of the probability mass functions.
Since the variables are independent, the convolution is \[ P(X_1 + X_2 + X_3 + X_4 = 6) = \sum_{\substack{x_1 + x_2 + x_3 + x_4 = 6 \\ x_i \geq 1}} P(X_1 = x_1) P(X_2 = x_2) P(X_3 = x_3) P(X_4 = x_4). \]
Here \( P(X_1 = n) = \left( \frac{3}{4} \right)^{n-1} \frac{1}{4} \) is the geometric distribution with success probability \( \frac{1}{4} \), so \( S = X_1 + X_2 + X_3 + X_4 \) is the trial number of the fourth success in Bernoulli\(\left(\frac{1}{4}\right)\) trials, a negative binomial distribution. Therefore \[ P(X_1 + X_2 + X_3 + X_4 = 6) = \binom{5}{3} \left( \frac{1}{4} \right)^4 \left( \frac{3}{4} \right)^2 = 10 \cdot \frac{1}{256} \cdot \frac{9}{16} = \frac{45}{2048} \approx 0.022. \] Quick Tip: For sums of i.i.d. random variables, use convolution, or recognise a standard structure such as the negative binomial, to find the probability of their sum.
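A brute-force convolution over the (truncated) support confirms the negative binomial value (a verification sketch):

```python
from fractions import Fraction as F

# Geometric pmf P(X = n) = (3/4)^(n-1) * (1/4); only n ≤ 6 can contribute to a sum of 6.
pmf = {n: F(3, 4)**(n - 1) * F(1, 4) for n in range(1, 7)}

prob = sum(pmf[a] * pmf[b] * pmf[c] * pmf[d]
           for a in pmf for b in pmf for c in pmf for d in pmf
           if a + b + c + d == 6)
print(prob, float(prob))    # -> 45/2048 ≈ 0.02197
```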
Let \( X \) be a random variable with the probability mass function \[ P(X = n) = \frac{1}{10}, \quad n = 1, 2, \dots, 10. \]
Then \( E(\max\{X, 5\}) \) equals ............
View Solution
Step 1: Define the random variable of interest.
We are asked to find \( E(\max\{X, 5\}) \). The function \( \max\{X, 5\} \) means that if \( X \) is greater than or equal to 5, then \( \max\{X, 5\} = X \), and if \( X \) is less than 5, then \( \max\{X, 5\} = 5 \).
Step 2: Compute the expected value.
We compute the expected value of \( \max\{X, 5\} \) by considering the probabilities: \[ E(\max\{X, 5\}) = \sum_{n=1}^{10} P(X = n) \cdot \max\{n, 5\}. \]
Since \( P(X = n) = \frac{1}{10} \) for \( n = 1, 2, \dots, 10 \), we calculate: \[ E(\max\{X, 5\}) = \frac{1}{10} \left( \sum_{n=1}^{4} 5 + \sum_{n=5}^{10} n \right). \]
This simplifies to: \[ E(\max\{X, 5\}) = \frac{1}{10} \left( 4 \times 5 + (5 + 6 + 7 + 8 + 9 + 10) \right) = \frac{1}{10} (20 + 45) = \frac{65}{10} = 6.5. \]
Step 3: Conclusion.
Thus, \( E(\max\{X, 5\}) = 6.5 \). Quick Tip: When dealing with functions of random variables, break them down into cases and use the law of total expectation.
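The expectation can be confirmed directly by summing over the ten equally likely values (a verification sketch):

```python
from fractions import Fraction as F

# E(max{X, 5}) for X uniform on {1, ..., 10}.
expectation = sum(F(1, 10) * max(n, 5) for n in range(1, 11))
print(expectation, float(expectation))    # -> 13/2 = 6.5
```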
Let \( X \) be a sample observation from \( U(\theta, \theta^2) \) distribution, where \( \theta \in \{2, 3\} \) is the unknown parameter. For testing \[ H_0 : \theta = 2 \quad against \quad H_1 : \theta = 3, \]
let \( \alpha \) and \( \beta \) be the size and power, respectively, of the test that rejects \( H_0 \) if and only if \( X \geq 3.5 \). Then \( \alpha + \beta \) equals ...............
View Solution
Step 1: Determine the size \( \alpha \).
The size of the test is the probability of rejecting \( H_0 \) when \( \theta = 2 \). We reject \( H_0 \) if \( X \geq 3.5 \), so we need to calculate the probability that \( X \geq 3.5 \) under \( H_0 \) (i.e., when \( \theta = 2 \)).
For \( X \sim U(2, 4) \), using the uniform CDF: \[ P(X \geq 3.5 \mid \theta = 2) = \frac{4 - 3.5}{4 - 2} = \frac{0.5}{2} = 0.25. \]
Thus, \( \alpha = 0.25 \).
Step 2: Determine the power \( \beta \).
The power of the test is the probability of rejecting \( H_0 \) when \( \theta = 3 \). We reject \( H_0 \) if \( X \geq 3.5 \), so we calculate the probability that \( X \geq 3.5 \) under \( H_1 \) (i.e., when \( \theta = 3 \)).
For \( X \sim U(3, 9) \), similarly: \[ P(X \geq 3.5 \mid \theta = 3) = \frac{9 - 3.5}{9 - 3} = \frac{5.5}{6} \approx 0.9167. \]
Thus, \( \beta = 0.9167 \).
Step 3: Conclusion.
Therefore, \( \alpha + \beta = 0.25 + 0.9167 \approx 1.1667 \) (exactly \( \frac{7}{6} \)). Quick Tip: To find \( \alpha \) and \( \beta \), use the CDF of the distribution at the critical point under both the null and alternative hypotheses.
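The two rejection probabilities follow directly from the uniform CDF, as the short sketch below confirms:

```python
# Size and power of the test "reject H0 iff X ≥ 3.5" when X ~ U(θ, θ²).
def p_reject(theta, cutoff=3.5):
    lo, hi = theta, theta ** 2
    return (hi - cutoff) / (hi - lo)    # P(X ≥ cutoff) for U(lo, hi), valid since lo ≤ cutoff ≤ hi

alpha = p_reject(2)                     # 0.25
beta = p_reject(3)                      # ≈ 0.9167
print(alpha, beta, alpha + beta)        # -> 0.25 0.9167 1.1667
```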
A fair die is rolled four times independently. For \( i = 1, 2, 3, 4 \), define
\[ Y_i = \begin{cases} 1, & \text{if the } i\text{-th roll shows a } 6, \\ 0, & \text{otherwise}. \end{cases} \]
Then \( P(\max\{Y_1, Y_2, Y_3, Y_4\} = 1) \) equals ............
View Solution
Step 1: Understand the probability expression.
We are asked to find the probability that at least one of the four rolls results in a 6, which is equivalent to \( P(\max\{Y_1, Y_2, Y_3, Y_4\} = 1) \). This is the probability that at least one \( Y_i \) is 1, i.e., at least one die shows a 6.
Step 2: Use the complement rule.
The complement of this event is that none of the dice show a 6. The probability that a single die does not show a 6 is \( \frac{5}{6} \), and since the rolls are independent, the probability that none of the four dice show a 6 is: \[ P(no 6 in four rolls) = \left(\frac{5}{6}\right)^4. \]
Step 3: Calculate the desired probability.
Thus, the probability that at least one die shows a 6 is: \[ P(\max\{Y_1, Y_2, Y_3, Y_4\} = 1) = 1 - \left(\frac{5}{6}\right)^4 = 1 - \frac{625}{1296} = \frac{671}{1296}. \]
Step 4: Conclusion.
Therefore, \( P(\max\{Y_1, Y_2, Y_3, Y_4\} = 1) = \frac{671}{1296} \). Quick Tip: To find the probability of at least one event happening, use the complement rule: \( P\)(at least one) = 1 - \(P\)(none).
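The complement-rule computation can be confirmed with exact fractions (a verification sketch):

```python
from fractions import Fraction as F

# P(at least one 6 in four independent rolls) = 1 - (5/6)^4.
p = 1 - F(5, 6) ** 4
print(p, float(p))    # -> 671/1296 ≈ 0.5177
```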
IIT JAM Previous Year Question Papers
| IIT JAM 2022 Question Papers | IIT JAM 2021 Question Papers | IIT JAM 2020 Question Papers |
| IIT JAM 2019 Question Papers | IIT JAM 2018 Question Papers | IIT JAM Practice Papers |


