I. Distributed Estimation of Pi
  1. How the computation works
  Consider a square of side x. Its area is S = x², and its inscribed circle has area C = Pi × (x/2)², so the ratio of circle to square area is C/S = Pi/4, which gives Pi = 4 × C/S.
  A computer can generate a large number of random points inside the square and approximate the areas by point counts. If Ps is the number of points in the square and Pc the number that land inside the circle, then as the number of random points grows without bound, 4 × Pc/Ps converges to Pi. The short standalone sketch after this paragraph illustrates the idea before we distribute it with Spark.
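  A minimal illustration of this estimator in plain Scala, without Spark (the object name, seed, and sample count are my own choices):

import scala.util.Random

// Monte Carlo estimate of Pi: sample points uniformly in the square
// [-1, 1) x [-1, 1) and count how many land inside the inscribed unit circle.
object LocalPi {
  def estimate(n: Int, rng: Random = new Random(42)): Double = {
    val inCircle = (1 to n).count { _ =>
      val x = rng.nextDouble() * 2 - 1
      val y = rng.nextDouble() * 2 - 1
      x * x + y * y < 1          // inside the circle of radius 1
    }
    4.0 * inCircle / n           // 4 * Pc / Ps approximates Pi
  }

  def main(args: Array[String]): Unit =
    println("Pi is roughly " + estimate(1000000))
}

  With a million samples the estimate is typically within a few thousandths of Pi, which matches the precision seen in the Spark runs below.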
  2. Running directly in IDEA
  (1) Start IDEA, Create New Project - Scala - choose the JDK and Scala SDK (Create - Browse - all jars under /home/jun/scala-2.12.6/lib) - Finish
  (2) Right-click src - New - Package - enter com.jun - OK
  (3) File - Project Structure - Libraries - + - Java - all jars under /home/jun/spark-2.3.1-bin-hadoop2.7/jars - OK
  (4) Right-click com.jun - New - Scala Class - Name (sparkPi) - Kind (Object) - OK, then enter the code below in the editor
package com.jun

import scala.math.random
import org.apache.spark._

object sparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("spark Pi")
    val spark = new SparkContext(conf)
    // Number of partitions, taken from the first program argument; default 2
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    // Sample n points in [-1, 1) x [-1, 1) and count those inside the unit circle
    val count = spark.parallelize(1 to n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
  (5) Run - Edit Configuration - + - Application - fill in the run configuration below - OK
  (6) Right-click in the code editor - Run sparkPi
  This produced an error caused by a version mismatch: the Spark website shows that spark-2.3.1 supports only scala-2.11.x, so the Scala SDK has to be replaced with a 2.11 version.
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;
at org.apache.spark.internal.config.ConfigHelpers$.stringToSeq(ConfigBuilder.scala:48)
at org.apache.spark.internal.config.TypedConfigBuilder$$anonfun$toSequence$1.apply(ConfigBuilder.scala:124)
at org.apache.spark.internal.config.TypedConfigBuilder$$anonfun$toSequence$1.apply(ConfigBuilder.scala:124)
at org.apache.spark.internal.config.TypedConfigBuilder.createWithDefault(ConfigBuilder.scala:142)
at org.apache.spark.internal.config.package$.<init>(package.scala:152)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.SparkConf$.<init>(SparkConf.scala:668)
at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
at org.apache.spark.SparkConf.set(SparkConf.scala:94)
at org.apache.spark.SparkConf$$anonfun$loadFromSystemProperties$3.apply(SparkConf.scala:76)
at org.apache.spark.SparkConf$$anonfun$loadFromSystemProperties$3.apply(SparkConf.scala:75)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:231)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:462)
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:462)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:75)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:70)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:57)
at com.jun.sparkPi$.main(sparkPi.scala:8)
at com.jun.sparkPi.main(sparkPi.scala)
Process finished with exit code 1
  The Spark website's notes for version 2.3.1 include the statement below, so Scala was switched to 2.11.8; since the installed IDEA Scala plugin version did not match either, the plugin was finally reinstalled over the network.
Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark 2.3.1 uses Scala 2.11. You will need to use a compatible Scala version (2.11.x).
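  For comparison, in an sbt-based build the same constraint would be expressed by pinning the Scala version (a hypothetical build.sbt sketch; this post fixes the version through IDEA's project settings instead):

// build.sbt -- pin Scala to the 2.11.x line that Spark 2.3.1 supports
scalaVersion := "2.11.8"

// "provided": the cluster supplies the Spark jars at runtime via spark-submit
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.1" % "provided"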
  Running again, the result can be found among a pile of log output:
11:00:17 INFO DAGScheduler:54 - ResultStage 0 (reduce at sparkPi.scala:16) finished in 0.779 s
11:00:17 INFO DAGScheduler:54 - Job 0 finished: reduce at sparkPi.scala:16, took 1.286323 s
Pi is roughly 3.13792
11:00:18 INFO AbstractConnector:318 - Stopped Spark@2c9399a4{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
11:00:18 INFO BlockManagerInfo:54 - Removed broadcast_0_piece0 on master:35290 in memory (size: 1176.0 B, free: 323.7 MB)
  3. Preparing for distributed execution
  Distributed execution means submitting a jar to the Spark cluster from the command line, so the program above has to be compiled and packaged into a jar.
  (1) File - Project Structure - Artifacts - + - jar - From modules with dependencies - set Main Class to com.jun.sparkPi - OK - under Output Layout keep only a single compile output - OK
  (2) Build - Build Artifacts - Build
  (3) Copy the resulting jar into the Spark installation directory
[jun@master bin]$ cp /home/jun/IdeaProjects/sparkAPP/out/artifacts/sparkAPP_jar/sparkAPP.jar /home/jun/spark-2.3.1-bin-hadoop2.7/
  4. Distributed execution
  (1) Local mode
[jun@master bin]$ /home/jun/spark-2.3.1-bin-hadoop2.7/bin/spark-submit --master local --class com.jun.sparkPi /home/jun/spark-2.3.1-bin-hadoop2.7/sparkAPP.jar
  The result is printed on the local command line:
11:12:21 INFO TaskSetManager:54 - Finished task 1.0 in stage 0.0 (TID 1) in 34 ms on localhost (executor driver) (2/2)
11:12:21 INFO DAGScheduler:54 - ResultStage 0 (reduce at sparkPi.scala:16) finished in 1.591 s
11:12:21 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool
11:12:21 INFO DAGScheduler:54 - Job 0 finished: reduce at sparkPi.scala:16, took 1.833831 s
Pi is roughly 3.14082
11:12:21 INFO AbstractConnector:318 - Stopped Spark@285f09de{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
11:12:21 INFO SparkUI:54 - Stopped Spark web UI at http://master:4040
11:12:21 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
11:12:21 INFO MemoryStore:54 - MemoryStore cleared
11:12:21 INFO BlockManager:54 - BlockManager stopped
  (2) Hadoop YARN cluster mode
[jun@master spark-2.3.1-bin-hadoop2.7]$ bin/spark-submit --master yarn --deploy-mode cluster sparkAPP.jar
  The command line returns the application report:
11:17:14 INFO Client:54 -
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 192.168.1.102
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 4
	 final status: SUCCEEDED
	 tracking URL: http://master:18088/proxy/application_1_0002/
  The result can be viewed in stdout under logs at the tracking URL:
11:17:14 INFO DAGScheduler:54 - ResultStage 0 (reduce at sparkPi.scala:16) finished in 0.910 s
11:17:14 INFO DAGScheduler:54 - Job 0 finished: reduce at sparkPi.scala:16, took 0.970826 s
Pi is roughly 3.14076
11:17:14 INFO AbstractConnector:318 - Stopped Spark@76017b73{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
11:17:14 INFO SparkUI:54 - Stopped Spark web UI at http://slave1:41837
11:17:14 INFO YarnAllocator:54 - Driver requested a total number of 0 executor(s).
  (3) Hadoop YARN client mode
[jun@master spark-2.3.1-bin-hadoop2.7]$ bin/spark-submit --master yarn --deploy-mode client sparkAPP.jar
  The result is shown directly on the local client:
11:20:21 INFO TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 3592 ms on slave1 (executor 1) (2/2)
11:20:21 INFO YarnScheduler:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool
11:20:21 INFO DAGScheduler:54 - ResultStage 0 (reduce at sparkPi.scala:16) finished in 12.041 s
11:20:21 INFO DAGScheduler:54 - Job 0 finished: reduce at sparkPi.scala:16, took 13.017473 s
Pi is roughly 3.1387
11:20:22 INFO AbstractConnector:318 - Stopped Spark@29a6924f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
11:20:22 INFO SparkUI:54 - Stopped Spark web UI at http://master:4040
11:20:22 INFO YarnClientSchedulerBackend:54 - Interrupting monitor thread
11:20:22 INFO YarnClientSchedulerBackend:54 - Shutting down all executors
11:20:22 INFO YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down
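  The three runs differ only in where the driver process lives, which is why the result shows up in different places. Recapping the submit commands used above (paths shortened):

# local:        driver and executors share one local JVM; results print to the console
bin/spark-submit --master local --class com.jun.sparkPi sparkAPP.jar
# yarn cluster: the driver runs inside the YARN ApplicationMaster on a worker node,
#               so results land in that container's stdout (see the tracking URL)
bin/spark-submit --master yarn --deploy-mode cluster sparkAPP.jar
# yarn client:  the driver stays on the submitting machine; results print locally
bin/spark-submit --master yarn --deploy-mode client sparkAPP.jar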
  5. Code analysis
  II. Loan Risk Prediction with Spark MLlib
  1. How the computation works
  The input is a CSV file holding a user credit dataset. For example:
[Six sample rows of germancredit.csv, each with 21 comma-separated numeric fields; inline HTML styling destroyed most digits during extraction.]
  In the credit dataset, every sample is labeled with one of two classes, 1 (creditworthy) or 0 (not creditworthy), and has 21 fields: the first field, 1 or 0, is the creditworthiness label, and the remaining 20 feature fields are balance, duration, history, purpose, amount, savings, employment status, installment percent, marital status, guarantors, residence duration, assets, age, concurrent credit, apartment, number of credits, occupation, dependents, has phone, and foreign worker.
  Decision-tree and random-forest models are used to make classification predictions about the risk of bank credit loans, as the toy sketch below illustrates.
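  Conceptually, a random forest trains many decision trees on random subsets of rows and features and classifies a sample by majority vote. A hedged sketch of that voting scheme (invented toy types, not the Spark MLlib API):

object ToyForest {
  // Each "tree" is reduced to its prediction function here.
  case class ToyTree(predict: Array[Double] => Int)

  // The forest predicts by majority vote over its trees.
  def forestPredict(trees: Seq[ToyTree], features: Array[Double]): Int =
    trees
      .map(_.predict(features))   // each tree casts one class vote (0 or 1)
      .groupBy(identity)          // tally the votes per class
      .maxBy(_._2.size)._1        // the majority class is the prediction
}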
  2. Running the program
  (1) Create a new Scala project, package, and class in IDEA, configure the Project SDK and Scala SDK, copy the csv file into the new project, and paste the code below into the editor
package com.jun

import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql._
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.tuning.{ ParamGridBuilder, CrossValidator }
import org.apache.spark.ml.{ Pipeline, PipelineStage }
import org.apache.spark.mllib.evaluation.RegressionMetrics

object Credit {

  // One row of the dataset: the creditworthiness label plus 20 numeric features
  case class Credit(
    creditability: Double,
    balance: Double, duration: Double, history: Double, purpose: Double, amount: Double,
    savings: Double, employment: Double, instPercent: Double, sexMarried: Double, guarantors: Double,
    residenceDuration: Double, assets: Double, age: Double, concCredit: Double, apartment: Double,
    credits: Double, occupation: Double, dependents: Double, hasPhone: Double, foreign: Double)

  // Build a Credit from one parsed CSV line; 1-based categorical codes are shifted to start at 0
  def parseCredit(line: Array[Double]): Credit = {
    Credit(
      line(0),
      line(1) - 1, line(2), line(3), line(4), line(5),
      line(6) - 1, line(7) - 1, line(8), line(9) - 1, line(10) - 1,
      line(11) - 1, line(12) - 1, line(13), line(14) - 1, line(15) - 1,
      line(16) - 1, line(17) - 1, line(18) - 1, line(19) - 1, line(20) - 1)
  }

  // Split each CSV line on commas and convert every field to Double
  def parseRDD(rdd: RDD[String]): RDD[Array[Double]] = {
    rdd.map(_.split(",")).map(_.map(_.toDouble))
  }

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkDFebay")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext._
    import sqlContext.implicits._

    val creditDF = parseRDD(sc.textFile("germancredit.csv")).map(parseCredit).toDF().cache()
    creditDF.registerTempTable("credit")
    creditDF.printSchema
    creditDF.show

    sqlContext.sql("SELECT creditability, avg(balance) as avgbalance, avg(amount) as avgamt, avg(duration) as avgdur FROM credit GROUP BY creditability").show
    creditDF.describe("balance").show
    creditDF.groupBy("creditability").avg("balance").show

    // Assemble the 20 feature columns into a single vector column "features"
    val featureCols = Array("balance", "duration", "history", "purpose", "amount",
      "savings", "employment", "instPercent", "sexMarried", "guarantors",
      "residenceDuration", "assets", "age", "concCredit", "apartment",
      "credits", "occupation", "dependents", "hasPhone", "foreign")
    val assembler = new VectorAssembler().setInputCols(featureCols).setOutputCol("features")
    val df2 = assembler.transform(creditDF)

    // Index the creditability column as the "label" column
    val labelIndexer = new StringIndexer().setInputCol("creditability").setOutputCol("label")
    val df3 = labelIndexer.fit(df2).transform(df2)

    // 70/30 train/test split with a fixed seed so runs are reproducible
    val splitSeed = 5043
    val Array(trainingData, testData) = df3.randomSplit(Array(0.7, 0.3), splitSeed)

    // Baseline random forest before any hyperparameter tuning
    val classifier = new RandomForestClassifier().setImpurity("gini").setMaxDepth(3).setNumTrees(20).setFeatureSubsetStrategy("auto").setSeed(5043)
    val model = classifier.fit(trainingData)

    val evaluator = new BinaryClassificationEvaluator().setLabelCol("label")
    val predictions = model.transform(testData)
    model.toDebugString
    val accuracy = evaluator.evaluate(predictions)
    println("accuracy before pipeline fitting" + accuracy)

    val rm = new RegressionMetrics(
      predictions.select("prediction", "label").rdd.map(x =>
        (x(0).asInstanceOf[Double], x(1).asInstanceOf[Double])))
    println("MSE: " + rm.meanSquaredError)
    println("MAE: " + rm.meanAbsoluteError)
    println("RMSE Squared: " + rm.rootMeanSquaredError)
    println("R Squared: " + rm.r2)
    println("Explained Variance: " + rm.explainedVariance + "\n")

    // Hyperparameter grid searched by 10-fold cross-validation
    val paramGrid = new ParamGridBuilder()
      .addGrid(classifier.maxBins, Array(25, 31))
      .addGrid(classifier.maxDepth, Array(5, 10))
      .addGrid(classifier.numTrees, Array(20, 60))
      .addGrid(classifier.impurity, Array("entropy", "gini"))
      .build()

    val steps: Array[PipelineStage] = Array(classifier)
    val pipeline = new Pipeline().setStages(steps)

    val cv = new CrossValidator()
      .setEstimator(pipeline)
      .setEvaluator(evaluator)
      .setEstimatorParamMaps(paramGrid)
      .setNumFolds(10)

    // Fit the pipeline over the whole grid and keep the best model
    val pipelineFittedModel = cv.fit(trainingData)

    val predictions2 = pipelineFittedModel.transform(testData)
    val accuracy2 = evaluator.evaluate(predictions2)
    println("accuracy after pipeline fitting" + accuracy2)

    println(pipelineFittedModel.bestModel.asInstanceOf[org.apache.spark.ml.PipelineModel].stages(0))
    pipelineFittedModel
      .bestModel.asInstanceOf[org.apache.spark.ml.PipelineModel]
      .stages(0)
      .extractParamMap

    val rm2 = new RegressionMetrics(
      predictions2.select("prediction", "label").rdd.map(x =>
        (x(0).asInstanceOf[Double], x(1).asInstanceOf[Double])))
    println("MSE: " + rm2.meanSquaredError)
    println("MAE: " + rm2.meanAbsoluteError)
    println("RMSE Squared: " + rm2.rootMeanSquaredError)
    println("R Squared: " + rm2.r2)
    println("Explained Variance: " + rm2.explainedVariance + "\n")
  }
}
  (2) Edit the launch configuration: Edit Configuration - Application - Name (Credit), Main Class (com.jun.Credit), Program arguments (/home/jun/IdeaProjects/Credit), VM options (-Dspark.master=local -Dspark.app.name=Credit -server -XX:PermSize=128M -XX:MaxPermSize=256M)
  (3) Run Credit
  (4) The console output is buried in INFO log lines, so nothing useful can be seen. To hide them, copy Spark's default log4j configuration file into the project's src directory and lower the log level shown on the console.
[jun@master conf]$ cp /home/jun/spark-2.3.1-bin-hadoop2.7/conf/log4j.properties.template /home/jun/IdeaProjects/Credit/src/
[jun@master conf]$ cd /home/jun/IdeaProjects/Credit/src/
[jun@master src]$ mv log4j.properties.template log4j.properties
[jun@master src]$ gedit log4j.properties
  In the log configuration file, change the level so that only ERROR messages reach the console:
log4j.rootCategory=ERROR, console
  Run again; the final result is:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
root
|-- creditability: double (nullable = false)
|-- balance: double (nullable = false)
|-- duration: double (nullable = false)
|-- history: double (nullable = false)
|-- purpose: double (nullable = false)
|-- amount: double (nullable = false)
|-- savings: double (nullable = false)
|-- employment: double (nullable = false)
|-- instPercent: double (nullable = false)
|-- sexMarried: double (nullable = false)
|-- guarantors: double (nullable = false)
|-- residenceDuration: double (nullable = false)
|-- assets: double (nullable = false)
|-- age: double (nullable = false)
|-- concCredit: double (nullable = false)
|-- apartment: double (nullable = false)
|-- credits: double (nullable = false)
|-- occupation: double (nullable = false)
|-- dependents: double (nullable = false)
|-- hasPhone: double (nullable = false)
|-- foreign: double (nullable = false)
[Output of creditDF.show: the first 20 rows of all 21 columns. The wide table was truncated in extraction, leaving only fragments of the amount column (1049.0, 2799.0, 841.0, 2122.0, ...).]
only showing top 20 rows
[Output of the SQL query: avgbalance, avgamt, and avgdur grouped by creditability; the table was truncated in extraction.]
[Output of creditDF.describe("balance"): count, mean, stddev, min, and max of the balance column; truncated in extraction.]
[Output of creditDF.groupBy("creditability").avg("balance"): the average balance per class (0.0 → ≈0.33, 1.0 → ≈1.29); trailing digits were lost in extraction.]
[Output of df2.show after VectorAssembler: the 21 original columns plus the assembled features vector column; first 20 rows, truncated in extraction apart from fragments such as (20,[1,2,3,4,6,7,...] and [1.0,12.0,2.0,9.0...].]
only showing top 20 rows
[Output of df3.show after StringIndexer: the same columns plus features and label; first 20 rows, truncated in extraction.]
only showing top 20 rows
accuracy before pipeline fitting0.8242
MSE: 0.22442
MAE: 0.22442
RMSE Squared: 0.20106
R Squared: -0.0956
Explained Variance: 0.64424
accuracy after pipeline fitting0.2331
RandomForestClassificationModel (uid=rfc_3146cd3eaaac) with 60 trees
MSE: 0.23759
MAE: 0.2376
RMSE Squared: 0.43247
R Squared: -0.39494
Explained Variance: 0.29524
Process finished with exit code 0
  From accuracy before pipeline fitting0.8242 and accuracy after pipeline fitting0.2331 above, we can see that the program can take the best model found by the cross-validated pipeline and apply it for prediction, comparing its predictions against the labels: the untuned forest scored 0.8242 on the evaluator's metric, while the pipeline-fitted model scored 0.2331.
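  One caveat when reading these figures: BinaryClassificationEvaluator defaults to area under the ROC curve, so the values printed as "accuracy" are AUC scores rather than classification accuracy. A one-line variant of the evaluator definition in the program above makes the metric explicit:

import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// areaUnderROC is already the default metric; setting it explicitly documents
// that the printed "accuracy" figures are AUC scores, not accuracy.
val evaluator = new BinaryClassificationEvaluator()
  .setLabelCol("label")
  .setMetricName("areaUnderROC")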
  3. Code analysis