After playing with Google Looker for my project, I decided to warm up my D3.js skills in case there is a need.
With D3.js and Flask, we can easily build a beautiful website.
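For example, a minimal Flask app that serves a page loading D3 from its CDN might look like the sketch below (the page content is hypothetical, just enough to show the two pieces working together):

from flask import Flask

app = Flask(__name__)

# A hypothetical one-page demo: load D3 from the official CDN and draw a bar
PAGE = """
<!doctype html>
<script src="https://d3js.org/d3.v7.min.js"></script>
<svg width="300" height="100"></svg>
<script>
  d3.select("svg").append("rect")
      .attr("x", 10).attr("y", 10)
      .attr("width", 120).attr("height", 40)
      .attr("fill", "steelblue");
</script>
"""

@app.route("/")
def index():
    return PAGE

if __name__ == "__main__":
    app.run(debug=True)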
COUNTY      ZIPCODE   Count
Howard      20759       263
            20777       887
            20833        89
            21029      6519
            21036       513
            21042     19988
            21043      2853
            21044       527
            21104      2839
            21163      3572
            21723       849
            21737      1714
            21738      2962
            21765        21
            21771      2619
            21784      2014
            21794      2136
            21797      5149
Montgomery  20842         4
            20871      4144
            20872      7159
            20876        11
            21770         2
            21771       169
---------------------------------------------------------
COUNTY      CITY              Count
Howard      BROOKEVILLE          89
            CLARKSVILLE        6519
            COLUMBIA            527
            COOKSVILLE          849
            DAYTON              513
            ELLICOTT CITY     22841
            FULTON              263
            GLENELG            1714
            GLENWOOD           2962
            HIGHLAND            887
            LISBON               21
            MARRIOTTSVILLE     2839
            MT. AIRY           2619
            SYKESVILLE         2014
            WEST FRIENDSHIP    2136
            WOODBINE           5149
            WOODSTOCK          3572
Montgomery  CLARKSBURG         4144
            DAMASCUS           7159
            DICKERSON             4
            GERMANTOWN           11
            MONROVIA              2
            MOUNT AIRY            3
            MT AIRY             166
There are some other players in quantum computing. I am going to write this section little by little.
There are a few big quantum computing players in China too, and they are showing great progress.
Much of the hype around quantum computing came from Google's quantum computer, since Google once claimed quantum supremacy. See this article:
However, I have not had any hands-on experience with the Google quantum computer at all.
The third quantum computer I encountered is Amazon's, through the Braket service. Here is the link:
https://aws.amazon.com/braket/
It is a little different from the IBM quantum computer, and it arrived a little later. However, because AWS services are so widely used, AWS quantum computing is spreading fast and wide.
It is easy to get hands-on experience there too.
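For example, a tiny Bell-pair circuit can be run on the local simulator that ships with the Braket SDK (a sketch, assuming the amazon-braket-sdk package is installed; no AWS account is needed for the local simulator):

from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a two-qubit Bell pair: Hadamard then CNOT
circuit = Circuit().h(0).cnot(0, 1)

# Run on the local simulator bundled with the SDK
result = LocalSimulator().run(circuit, shots=1000).result()
print(result.measurement_counts)  # roughly half '00' and half '11'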
The second quantum computer I encountered is the IBM quantum computer. It is a more general-purpose machine and one of the mainstream quantum computers.
You can create a personal account at https://www.ibm.com/quantum and build some toy quantum computing problems there.
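For example, the same kind of Bell-pair circuit in Qiskit (a sketch, assuming the qiskit and qiskit-aer packages; Qiskit's API has changed across versions, so treat this as illustrative):

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Two-qubit Bell pair: Hadamard then CNOT, then measure both qubits
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11'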
I worked with quantum computing and quantum computers for a long while in the past, and I still keep an eye on this area. I am not a researcher but a quantum computing user, so I will not go deep into the theory; instead I will focus on applications and some hands-on experience with quantum computing.
I will take a quick first pass to write a few sections and gradually add more content here.
My first contact with quantum computing was D-Wave, which takes a specialized approach: quantum annealing. I ran some simulations on a D-Wave quantum computer (it is really cool) and did some research on the quantum annealing process too.
Some people would say D-Wave is not really a quantum computer. However, since quantum computers are not general-purpose computers anyway, I do not agree with that assessment.
Here is the link to the D-Wave system.
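To give a flavor of the annealing-style formulation, here is a tiny QUBO solved with D-Wave's open-source dimod package (a sketch; ExactSolver enumerates all assignments locally, so no D-Wave hardware is needed):

import dimod

# Minimize E(a, b) = -a - b + 2ab over binary a, b:
# the penalty term 2ab makes picking exactly one variable optimal
bqm = dimod.BinaryQuadraticModel({'a': -1, 'b': -1}, {('a', 'b'): 2}, 0.0, dimod.BINARY)

sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)  # lowest-energy assignment, energy -1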
In order to understand quantum computing, we need to know the basic unit, the qubit, which is the counterpart of the bit in a conventional/classical computer. A bit takes only two values, 1 or 0 (on/off). A qubit can take infinitely many states: its state has both an amplitude and a direction, often visualized as a point on the Bloch sphere.
There are three key ideas behind qubits: superposition, entanglement, and interference.
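As a rough illustration of superposition and measurement (a numpy sketch, not real quantum hardware):

import numpy as np

# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# Equal superposition, e.g. the result of a Hadamard gate on |0>:
state = np.array([1, 1]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes
p0, p1 = np.abs(state) ** 2

# Each measurement collapses the qubit to 0 or 1 at random
samples = np.random.choice([0, 1], size=1000, p=[p0, p1])
print(p0, p1, samples.mean())  # 0.5 0.5, and a mean near 0.5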
This is a good article:
https://www.r-bloggers.com/2019/04/methods-for-dealing-with-imbalanced-data/
Unfortunately, R's glm does not have a class_weight argument directly; a little manipulation is needed to create the weights vector.
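For comparison, scikit-learn in Python exposes class weighting directly (a minimal sketch on synthetic data, not the notebook linked below):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with roughly 95% negatives and 5% positives
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)

# class_weight='balanced' reweights each class inversely to its frequency,
# mimicking the weights vector one would build by hand for R's glm
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(clf.score(X, y))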
Unbalanced datasets are very common: for example, credit card transactions (the majority are authentic), malware detection (the majority of files are benign), internet traffic (the majority is friendly), CT scans (the majority show no tumor), etc.
Why do we need to deal with it, and how? Here we will use a Jupyter notebook to illustrate the problem.
I am writing this post little by little, so it may take a few days to finish.
https://github.com/chaowu2009/ML_Projects/blob/master/ML_unbalanced_data.ipynb
import pandas as pd

# df is assumed to be a DataFrame with a 'name' column
grpd = df.groupby('name').size()  # row count per name
print(grpd)
grpd.reset_index(name='count').to_csv('result.csv', index=False)
https://www.analyticsvidhya.com/blog/2021/04/portfolio-optimization-using-mpt-in-python/
#!pip uninstall yfinance
#!pip uninstall pandas-datareader
#!pip install yfinance --upgrade --no-cache-dir
#!pip install pandas-datareader
# These commands may not work inside a Jupyter notebook;
# run them on the command line instead.
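Once the packages are installed, a bare-bones Monte Carlo take on Modern Portfolio Theory might look like the sketch below (hypothetical tickers; not necessarily the article's exact method, and yfinance's column layout varies by version):

import numpy as np
import yfinance as yf

# Hypothetical tickers and date range, just for illustration
tickers = ["AAPL", "MSFT", "GOOG"]
prices = yf.download(tickers, start="2020-01-01", end="2023-01-01")["Close"]
returns = prices.pct_change().dropna()

# Annualized mean returns and covariance (252 trading days)
mu = returns.mean() * 252
cov = returns.cov() * 252

# Random-portfolio search for the maximum Sharpe ratio
best_sharpe, best_w = -np.inf, None
for _ in range(10_000):
    w = np.random.dirichlet(np.ones(len(tickers)))
    ret, vol = w @ mu, np.sqrt(w @ cov.values @ w)
    if ret / vol > best_sharpe:
        best_sharpe, best_w = ret / vol, w

print(best_sharpe, dict(zip(tickers, best_w.round(3))))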
Dodd-Frank Act stress testing is a forward-looking exercise that assesses the impact on capital levels that would result from immediate financial shocks and nine quarters of severely adverse economic conditions. FHFA requires Fannie Mae and Freddie Mac to submit the results of stress tests based on two scenarios: a Baseline scenario and a Severely Adverse scenario. FHFA aligned the stress test scenario variables and assumptions with those used by the Board of Governors of the Federal Reserve System in its annual Dodd-Frank Act stress tests. As of March 2020, according to the Stress Testing of Regulated Entities Final Rule, the Federal Home Loan Banks are no longer required to conduct Dodd-Frank Act stress tests. As a prudential matter, FHFA expects the FHLBanks to continue to perform other stress tests (addressing market, credit, liquidity, and model risks) as outlined in FHFA's regulations and guidance.
In 2014 FHFA began requiring its regulated entities to conduct stress tests pursuant to the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank Act). As amended by Section 401 of the Economic Growth, Regulatory Relief, and Consumer Protection Act (EGRRCPA), the Dodd-Frank Act requires certain financial companies with total consolidated assets of more than $250 billion, and which are regulated by a primary federal financial regulatory agency, to conduct periodic stress tests to determine whether the companies have sufficient capital to absorb losses and support operations during adverse economic conditions.
https://www.fhfa.gov/SupervisionRegulation/DoddFrankActStressTests
Risk appetite

The Firm’s overall appetite for risk is governed by a “Risk Appetite” framework. The framework and the Firm’s risk appetite are set and approved by the Firm’s CEO, Chief Financial Officer (“CFO”) and CRO. Quantitative parameters and qualitative factors are used to monitor and measure the Firm’s capacity to take risk consistent with its stated risk appetite. Qualitative factors have been established to assess select operational risks, and impact to the Firm’s reputation. Risk Appetite results are reported to the Board Risk Committee.
This is a good article about Monte Carlo simulation:
https://www.mikulskibartosz.name/monte-carlo-simulation-in-python/
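To give a flavor of the technique, here is the classic Monte Carlo estimate of pi (a minimal sketch, not taken from the article):

import numpy as np

# Sample random points in the unit square; the fraction landing inside
# the quarter circle of radius 1 converges to pi/4
n = 1_000_000
x, y = np.random.rand(2, n)
print(4 * ((x**2 + y**2) <= 1.0).mean())  # close to 3.14159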
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Load data
dat = sm.datasets.get_rdataset("Guerry", "HistData").data

# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit()

# Inspect the results
print(results.summary())
import matplotlib
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
matplotlib.rc('font', family='serif')  # original family value was truncated; 'serif' is a placeholder
import matplotlib.pyplot as plt
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
matplotlib.rc('font', size=20)
matplotlib.rc('axes', titlesize=20)
matplotlib.rc('axes', labelsize=20)
matplotlib.rc('legend', fontsize=20)
matplotlib.rc('figure', titlesize=20)
#size=25
size=15
params = {'legend.fontsize': 'large',
'figure.figsize': (20,8),
'axes.labelsize': size,
'axes.titlesize': size,
'xtick.labelsize': size*0.75,
'ytick.labelsize': size*0.75,
'axes.titlepad': 25}
plt.rcParams.update(params)
Here we run a Beta-PERT Monte Carlo simulation: we sample an event frequency and a loss magnitude from two PERT distributions and multiply them to get a simulated annualized loss.
from scipy.stats import beta
from scipy.stats import rv_continuous
import matplotlib.pyplot as plt  # for plotting the simulated results

class Beta_PERT(rv_continuous):
    """Beta-PERT distribution parameterized by minimum, mode, maximum, and lambda."""

    def _shape(self, minimum, mode, maximum, lamb):
        # Convert PERT parameters into the underlying Beta shape parameters
        alpha = 1 + lamb * (mode - minimum) / (maximum - minimum)
        beta_shape = 1 + lamb * (maximum - mode) / (maximum - minimum)
        return alpha, beta_shape

    def _cdf(self, x, minimum, mode, maximum, lamb):
        s_alpha, s_beta = self._shape(minimum, mode, maximum, lamb)
        z = (x - minimum) / (maximum - minimum)  # rescale x onto [0, 1]
        return beta.cdf(z, s_alpha, s_beta)

pert = Beta_PERT(name="pert")
rv_1 = pert(0.02, 0.05, 0.2, 4)  # event frequency: (min, mode, max, lambda)
rv_2 = pert(1, 5, 20, 4)         # loss magnitude: (min, mode, max, lambda)

N = 5000
freq = rv_1.rvs(N)  # sampled frequencies
loss = rv_2.rvs(N)  # sampled loss magnitudes
ALE = freq * loss   # annualized loss expectancy per trial
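Continuing from the block above, a quick percentile summary and histogram (assuming matplotlib, imported above) give a view of the simulated loss distribution:

import numpy as np

# Spread of the simulated annualized losses
print(np.percentile(ALE, [5, 50, 95]))

plt.hist(ALE, bins=50)
plt.xlabel("Annualized loss")
plt.ylabel("Count")
plt.show()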