Bayesian methods have proven enormously successful across a wide range of scientific problems, with analyses ranging from the simple one-sample problem to complicated hierarchical models. However, Bayesian methods run into difficulty with two major and prevalent classes of problems: data sets containing outliers and model misspecification. This research develops and implements a new method to handle both. In particular, we propose the use of what we call the restricted likelihood in place of the originally posited likelihood. Under the restricted likelihood, we summarize the data through a set of (insufficient) statistics $T(y)$ and update our prior distribution with the likelihood of $T(y)$ rather than the likelihood of $y$. By appropriate choice of $T(y)$, we retain the main benefits of Bayesian methods while reducing the sensitivity of the analysis to selected features of the data that are difficult (if not impossible) to model correctly. This talk will discuss the MCMC method we developed for implementing the restricted likelihood, along with a hierarchical regression example applied to a messy insurance data set containing many outliers.