Data Mining Exercise: Decision Tree Algorithm

Published by angulaer on 2020-05-17 13:37:44

Copyright notice: This is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.

This exercise performs data mining on processed.cleveland.data, from the UCI public Heart Disease dataset. The attribute information is as follows:
1. #3 (age)
2. #4 (sex)
3. #9 (cp)
4. #10 (trestbps)
5. #12 (chol)
6. #16 (fbs)
7. #19 (restecg)
8. #32 (thalach)
9. #38 (exang)
10. #40 (oldpeak)
11. #41 (slope)
12. #44 (ca)
13. #51 (thal)
14. #58 (num) (the predicted attribute)
Dataset reference: https://archive.ics.uci.edu/ml/datasets/Heart+Disease
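Before running the algorithm it helps to glance at the raw file. The short check below is my own addition, not part of the original exercise; it assumes processed.cleveland.data sits in the working directory and relies on the '?' marker that the UCI file uses for missing values.

# Quick inspection of the raw file (assumed to be in the working directory)
with open('processed.cleveland.data') as f:
    rows = [line.strip().split(',') for line in f if line.strip()]

print('records:', len(rows))                    # 303 rows in the Cleveland subset
print('attributes per record:', len(rows[0]))   # 14, the last one is num
print('records containing "?":', sum('?' in r for r in rows))  # '?' marks missing values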

import math
import operator


def calcShannonEnt(dataset):
    # Shannon entropy of the class label stored in the last column
    numEntries = len(dataset)
    labelCounts = {}
    for featVec in dataset:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * math.log(prob, 2)
    return shannonEnt


def CreateDataSet():
    # Convert a string to float; unparseable values such as '?' become None
    def safe_float(number):
        try:
            return float(number)
        except ValueError:
            return None

    # Read the data file, one comma-separated record per line
    dataset = []
    with open('processed.cleveland.data') as read_file:
        for line in read_file:
            line = line.replace('\n', '').split(',')
            line = list(map(safe_float, line))
            dataset.append(line)
    labels = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg',
              'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'num']
    return dataset, labels


def splitDataSet(dataSet, axis, value):
    # Keep the rows whose feature `axis` equals `value`, with that feature removed
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet


def majorityCnt(classList):
    # Return the most frequent class label
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]


def chooseBestFeatureToSplit(dataSet):
    # Pick the feature with the largest information gain (ID3 criterion)
    numberFeatures = len(dataSet[0]) - 1
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numberFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature


def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree


MyData, label = CreateDataSet()
# Build the ID3 decision tree
createTree(MyData, label)
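As a quick, hand-checkable sanity test of calcShannonEnt and chooseBestFeatureToSplit, here is a toy dataset of my own (not part of the original exercise):

# Toy dataset: one binary feature, class labels in the last column
toy = [[1, 'yes'], [1, 'yes'], [0, 'no'], [0, 'no'], [1, 'no']]

# 2 "yes" and 3 "no" labels: -(2/5)*log2(2/5) - (3/5)*log2(3/5) ≈ 0.971
print(calcShannonEnt(toy))

# Splitting on feature 0 gives subset entropies 0.918 (value 1) and 0.0 (value 0),
# so the information gain is 0.971 - (3/5)*0.918 ≈ 0.420 and feature 0 is chosen.
print(chooseBestFeatureToSplit(toy))  # -> 0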

Running result:
The nested-dictionary representation of the ID3 tree returned by createTree (the screenshot from the original post is not reproduced here).
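The exercise stops after building the tree. To actually use it for prediction, a classifier that walks the nested dictionary could look like the sketch below. The classify function and the usage lines are my own illustrative additions, not part of the original code; they assume the tree was built with the label list returned by CreateDataSet.

def classify(inputTree, featLabels, testVec):
    # Each node is {feature name: {feature value: subtree or class label}}
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    classLabel = None
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if isinstance(secondDict[key], dict):
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel

# Usage: re-read the data so the label list is intact, build the tree on a copy
# of the labels (createTree deletes entries), then classify the first record.
MyData, label = CreateDataSet()
myTree = createTree(MyData, label[:])
print(classify(myTree, label, MyData[0]))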

