Construction and Computation of Graphs in Ramsey Theory
Ph.D. dissertation, Huazhong University of Science and Technology, 2008
Abstract:
Ramsey theory is an important branch of combinatorics, with concrete applications in communications, computer information retrieval, decision science, and other fields; in recent years it has developed rapidly through interaction with other branches of mathematics. Ramsey numbers, Ramsey multiplicities, and Folkman numbers are central objects of study in Ramsey theory. To date only a few exact values of Ramsey numbers, Ramsey multiplicities, and Folkman numbers have been determined, and most of the known upper and lower bounds are far apart; further progress, even just improving these bounds, involves an enormous amount of computation. This dissertation studies several problems in Ramsey theory, covering classical Ramsey numbers, generalized 2-color Ramsey numbers, 3-color Ramsey numbers of paths and cycles, Ramsey multiplicities, and Folkman numbers.
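For readers coming from outside the field, the central quantity can be stated compactly. The following formulation is standard textbook background, supplied here for orientation rather than quoted from the thesis:

\[
R(s,t) = \min\{\, n : \text{every red/blue coloring of the edges of } K_n \text{ yields a red } K_s \text{ or a blue } K_t \,\}.
\]

The computational difficulty mentioned above is visible directly in this definition: the number of 2-colorings of the edges of K_n is 2^{\binom{n}{2}}, so exhaustive search is hopeless for all but very small n.

The main results are as follows: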
(1) Studying the properties of Ramsey graphs helps in finding new Ramsey graphs, and thereby in improving lower bounds on Ramsey numbers. This dissertation studies the relationship between Ramsey graphs and quasi-regular graphs, and proposes constructing quasi-regular graphs as a way of finding new Ramsey graphs, in particular (5,5)-graphs; it verifies that no strongly regular self-complementary graph is a (5,5) Ramsey graph; and, by searching for isomorphic subgraphs satisfying certain conditions, it obtains new lower bounds 129, 235, and 937 for the three classical Ramsey numbers R(6,8), R(7,9), and R(8,17).
(2) Determining small Ramsey numbers of the types R(C_m, B_n) and R(K_{m,n}, K_{p,q}), as well as exact values of R(B_m, B_n), is a very difficult problem, and only a few exact values are known. This dissertation computes several values of R(C_m, B_n), shows by computation that R(B_3, B_4) = 15, and uses regular graphs to obtain new lower bounds for R(K_{2,5}, K_{2,9}), R(K_{2,6}, K_{2,9}), and R(K_{2,7}, K_{2,9}), thereby determining their exact values.
(3) Among 3-color generalized Ramsey numbers, those of cycles, of paths, and of mixed path-cycle forms have received the most study; so far only a few exact values of small mixed path-cycle Ramsey numbers are known, together with the exact values of R(P_3, C_k, C_k), R(P_3, P_4, C_k), R(P_3, P_5, C_k), and R(P_4, P_4, C_k). Combining computer search with mathematical proof, this dissertation obtains some new Ramsey numbers of paths and of mixed path-cycle forms, as well as several exact values of R(P_4, P_5, C_k) and R(P_4, P_6, C_k).
(4) In 2002, S. P. Radziszowski and Kung-Kuen Tse showed that R(C_4, K_9) ≤ 33 and R(C_4, K_{10}) ≤ 40. This dissertation improves the upper bounds on R(C_4, K_9) and R(C_4, K_{10}) to 32 and 39 respectively, and obtains upper and lower bounds for several multicolor Ramsey numbers of C_4 versus complete graphs.
(5) The multiplicity M(G) of a graph G is defined as the minimum, over all 2-colorings of the edges of K_{R(G,G)}, of the number of monochromatic copies of G. The Ramsey multiplicities of the graphs of order 4 were not fully determined until 2001, when the exact value of M(K_4) was finally settled. Using computer algorithms, this dissertation determines the exact Ramsey multiplicities of 17 graphs of order 5, and obtains upper bounds on the Ramsey multiplicities of 5 further graphs of order 5 by simulated annealing.
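In symbols, the definition just given can be written as follows (standard notation, added for clarity):

\[
M(G) = \min_{\chi} \, m_G(\chi),
\]

where \chi ranges over all 2-colorings of the edges of K_{R(G,G)} and m_G(\chi) denotes the number of monochromatic copies of G under \chi.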
(6) This dissertation studies properties of, and algorithms for, some Folkman graphs. By computation it obtains new upper bounds 16, 18, 22, and 23 for F_v(3,5;6), F_v(3,6;7), F_v(3,7;8), and F_v(3,8;9), and gives a new upper-bound formula for Folkman numbers of the type F_v(3,k;k+1); it improves the upper bound on F_v(4,4;5) from 25 to 23 and the lower bound from 16 to 17; it proves that F_v(k,k;k+1) ≥ 4k - 1 for k ≥ 4; and it gives the lower bound F_e(3,4;5) ≥ 22.
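For orientation, the vertex Folkman numbers used above are conventionally defined as follows (standard definition, supplied as background rather than taken from the thesis text):

\[
F_v(s,t;k) = \min\{\, n : \exists\, G \text{ on } n \text{ vertices},\ K_k \nsubseteq G,\ G \rightarrow (s,t)^v \,\},
\]

where G \rightarrow (s,t)^v means that every 2-coloring of the vertices of G contains a K_s in the first color or a K_t in the second. The edge Folkman numbers F_e are defined analogously, with edge colorings in place of vertex colorings.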
(7) The concept of Ramsey numbers of multigraphs is introduced and studied systematically, generalizing many results on classical Ramsey numbers.
Degree-granting institution: Huazhong University of Science and Technology. Degree: Ph.D. Year conferred: 2008. Chinese Library Classification: O157.5.
Table of contents:

Abstract
1 Introduction
1.1 Research background
1.2 Research objectives
1.3 A brief history and current state of graph Ramsey theory
1.4 Research approach and innovations
1.5 Basic concepts of graph theory
1.6 Research content and organization of the dissertation
2 Lower bounds for several classical Ramsey numbers
2.1 Strongly regular self-complementary graphs and the Ramsey number R(5,5)
2.2 Quasi-regular graphs and Ramsey numbers
2.3 New lower bounds for R(6,8), R(7,9), and R(8,17)
2.4 Chapter summary
3 Computing generalized 2-color Ramsey numbers
3.1 Exact values of several Ramsey numbers R(C_m, B_n)
3.2 Exact values of several Ramsey numbers R(K_{m,n}, K_{p,q})
3.3 Exact values of the Ramsey numbers R(B_2, B_3) and R(B_3, B_4)
3.4 Chapter summary
4 Generalized multicolor Ramsey numbers
4.1 Exact values of several 3-color Ramsey numbers of paths and cycles
4.2 Upper and lower bounds for Ramsey numbers of C_4 versus K_n
4.3 Chapter summary
5 Computing Ramsey multiplicities
5.1 Computing exact values of Ramsey multiplicities
5.2 Upper bounds on Ramsey multiplicities
5.3 An upper bound on M(K_4; 43)
5.4 Chapter summary
6 Upper and lower bounds for several Folkman numbers
6.1 Bounds on F_v(4,4;5)
6.2 Upper bounds on F_v(3,k;k+1)
6.3 Lower bounds on F_v(k,k;k+1)
6.4 Bounds on F_v(2,3,3;4) and F_v(2,2,2,3;4)
6.5 A lower bound on F_e(3,4;5)
6.6 Chapter summary
7 A multigraph generalization of the classical Ramsey numbers
7.1 From Ramsey numbers of simple graphs to Ramsey numbers of multigraphs
7.2 Lower bounds on f_k^{k-1}(q) and a paper of Alon et al.
7.3 Chapter summary
8 Conclusions and outlook
8.1 Summary of the dissertation
8.2 Problems for future work
Acknowledgements
References
Appendix 1: Academic papers published during the doctoral program
Appendix 2: Correspondence between the dissertation chapters and the papers published during the doctoral program
Just for Fun: A Happy Farm Profit Calculator (updated to v0.4)
What it does: finds the crop with the highest current profit. How to use it: create a new text file, open it, paste in the code below, save and close it, then change the extension to .htm or .jsp (with IE it must be .htm; a prompt will pop up when you open it, just allow the script to run), and open the file in a browser. You can calculate one crop or several at once; any odd stray characters can be ignored, they have no effect. Enter a crop's expected income, click Calculate, and the profit comes out!
One open question: for a three-season crop, does the "re-mature time" mean the time for one further maturation? For example, apples are three-season, with a mature time of 10 and a re-mature time of 5, so the total time = 10 + 5 * 2. Is that right?
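If that reading is right, the arithmetic for the apple example goes: 10 hours to the first harvest, plus 5 hours for each of the 2 further seasons, so the plot is occupied for 10 + 5 × 2 = 20 hours in total, and the per-hour profit should divide the net income by 20 rather than by 10.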
v0.4 update:
1. Fixed yet another oversight in the fertilizer calculation; that was careless of me.

v0.3 update:
1. Fixed a problem with the fertilizer handling; it has come out right in several trials, but needs further testing.
2. Simplified the code.

v0.2 update:
1. Discovered that the seed cost was never being subtracted; that's fixed now.
Not fully tested yet, so there are bound to be bugs; I'll keep updating it. If you take it for your own use, please leave a note so I can let you know about updates. This is my first time building something like this and I'm still learning, so please forgive me if a result comes out wrong.

Test log, verified so far: white radish, carrot.

<html>
<head><title>Happy Farm Profit Calculator</title></head>
<body>
<p>White radish expected income: <input type="text" name="tf_1"> Profit per hour: <input type="text" name="fresult" disabled></p>
<p>Carrot expected income: <input type="text" name="tf_2"> Profit per hour: <input type="text" name="fresult2" disabled></p>
<p>Corn expected income: <input type="text" name="tf_22"> Profit per hour: <input type="text" name="fresult22" disabled></p>
<p>Potato expected income: <input type="text" name="tf_23"> Profit per hour: <input type="text" name="fresult23" disabled></p>
<p>Eggplant expected income: <input type="text" name="tf_24"> Profit per hour: <input type="text" name="fresult24" disabled></p>
<p>Tomato expected income: <input type="text" name="tf_25"> Profit per hour: <input type="text" name="fresult25" disabled></p>
<p>Pea expected income: <input type="text" name="tf_26"> Profit per hour: <input type="text" name="fresult26" disabled></p>
<p>Chili expected income: <input type="text" name="tf_27"> Profit per hour: <input type="text" name="fresult27" disabled></p>
<p>Pumpkin expected income: <input type="text" name="tf_28"> Profit per hour: <input type="text" name="fresult28" disabled></p>
<p>Apple expected income: <input type="text" name="tf_pingguo"> Profit per hour: <input type="text" name="tfr_pingguo" disabled></p>
<p>Watermelon expected income: <input type="text" name="tf_xigua"> Profit per hour: <input type="text" name="tfr_xigua" disabled></p>
...
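The script that does the actual computation is cut off in the listing above, but the rule the changelog describes -- subtract the seed cost (the v0.2 fix), then divide by the total time the plot is occupied, counting re-maturing seasons -- can be sketched as below. This is an illustrative reconstruction in Java rather than the blogger's original JavaScript; the method name, parameters, and sample crop numbers are all assumptions, not values taken from the game.

// Illustrative sketch of the per-hour profit rule described above.
public class CropProfit {

    /**
     * @param expectedIncome total sale income entered by the player
     * @param seedCost       cost of the seed (the v0.2 fix subtracts this)
     * @param matureHours    hours until the first harvest
     * @param regrowHours    hours for each re-maturing season
     * @param seasons        number of harvests (1 for single-season crops)
     */
    static double profitPerHour(double expectedIncome, double seedCost,
                                double matureHours, double regrowHours, int seasons) {
        // Total plot time: first maturation plus each further re-maturation.
        double totalHours = matureHours + regrowHours * (seasons - 1);
        return (expectedIncome - seedCost) / totalHours;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: a three-season apple, 10 h to mature, 5 h to re-mature.
        System.out.println(profitPerHour(300, 60, 10, 5, 3)); // (300 - 60) / 20 = 12.0 per hour
    }
}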
Build a Compute Farm
by Tom White, 04/21/2005
Some programs can be made to run faster by dividing them up into smaller pieces and running these pieces on multiple processors. This is known as parallel computing, and a large number of hardware and software systems exist to facilitate it. The most famous example of a (distributed) parallel program is SETI@home, but there are many other applications, including ray tracing, database searching, code breaking, neural network training, genetic algorithms, and a whole host of other problems where a brute-force approach is needed.

ComputeFarm is an open source Java framework for developing and running parallel programs. Under the covers, ComputeFarm runs on Jini, which brings code mobility and fault tolerance to network applications. From version 2.1 onwards, Jini is being released under an open source (Apache) license, so this is an exciting time for Jini. This article introduces ComputeFarm and illustrates how to run parallel programs with it.
The Replicated-Worker Pattern

ComputeFarm grew out of an implementation in JavaSpaces (itself a part of Jini) of the Replicated-Worker pattern from the definitive book on JavaSpaces, JavaSpaces Principles, Patterns, and Practice, by Eric Freeman, Susanne Hupfer, and Ken Arnold. In this pattern, also known as the Master-Worker pattern, a master process creates a collection of tasks that need to be run. Workers take tasks from the collection and run them, then hand the computed result to the master. A space is a natural conduit for passing messages between master and workers, due to the decoupled programming style it encourages.
In ComputeFarm, a ComputeSpace holds Task objects and result objects of type Object. Each worker's lifecycle is as follows:

1. Wait for an available Task from the ComputeSpace.
2. Execute the Task.
3. Put the Task's result back into the ComputeSpace.
4. Go to step 1.

There are typically many workers, and hence the term replicated. This pattern neatly provides load balancing, whereby each worker contributes whatever resources it can afford. The worker on a faster machine will execute more Tasks than the worker on a slower or otherwise heavily loaded machine, and as long as the granularity of the Tasks is sufficiently fine, no one worker will hold up the computation.
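The article never shows a worker's source, but the four-step lifecycle above translates directly into a take-execute-write loop. The following is a minimal illustrative sketch written against the ComputeSpace and Task types described in this article; the takeTask() and writeResult() method names are assumptions made for illustration, not ComputeFarm's actual worker API.

// Illustrative replicated-worker loop (hypothetical method names).
public class Worker implements Runnable {

    private final ComputeSpace space;

    public Worker(ComputeSpace space) {
        this.space = space;
    }

    public void run() {
        while (true) {
            try {
                Task task = space.takeTask();    // 1. wait for an available task
                Object result = task.execute();  // 2. execute it
                space.writeResult(result);       // 3. put the result back
            } catch (Exception e) {
                // On failure, do nothing: the transaction the task ran under
                // rolls back, so another worker can pick the task up again.
            }
        }                                        // 4. go to step 1
    }
}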
A client of ComputeFarm (the master process) will not usually think in terms of the workers doing their work, but in terms of the overall problem they have to solve, called a Job. From the client's point of view, this is what happens:

1. The client creates a Job and specifies how to divide it into Tasks.
2. Each Task is written into a ComputeSpace by the Job.
3. Each Task in the ComputeSpace is turned into a result by one of the replicated workers.
4. Results of executed Tasks are read from the ComputeSpace by the Job and combined into an overall result for the client.

Notice that these processes typically run concurrently. So, for example, the client may still be dividing the Job into Tasks as the computed results of earlier Tasks come back to be processed. For the client, the ComputeSpace is simply a raw computing resource where tasks are automatically executed as soon as they are dropped into the space. In fact, the client doesn't know about the workers, and they do not appear in the core API. The flow is shown schematically in Figure 1.
That's enough theory. Let's write a program to run on ComputeFarm.
A Simple Example

To see how ComputeFarm can be useful, consider the following very simple example. If we wanted to calculate the sum of the first n squares, that is:

1^2 + 2^2 + 3^2 + ... + n^2

then we might write the following piece of code (ignoring the fact that there is a simple closed formula for this sum, n(n + 1)(2n + 1)/6):

int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i * i;
}
Now imagine for the sake of example that multiplication is a significantly
more expensive operation than addition. If we had multiple processors to run the
program on, it would be worthwhile to arrange for the squaring operations to be
shared among the processors to reduce the time to calculate the sum. In other
words, we break the problem into smaller sub-problems (squaring), and then
re-combine the sub-results to produce the final result (addition).
To run a Job, a ComputeFarm client gets a JobRunner
from a JobRunnerFactory. There are different implementations of
JobRunner that have different strategies for parallelizing
Task execution. For unit testing, the JobRunner
created by a SimpleJobRunnerFactory is handy, as it executes all
tasks in the same JVM. We shall see later how to run a computation on multiple
machines using the JavaSpaces implementation of JobRunner.
First of all, let's see how we can implement a program to calculate the sum of squares, starting with the unit test:

package computefarm.samples.squares;

import junit.framework.TestCase;

import computefarm.JobRunner;
import computefarm.JobRunnerFactory;
import computefarm.impl.simple.SimpleJobRunnerFactory;

public class SquaresJobTest extends TestCase {

    public void test() {
        int n = 10;
        SquaresJob job = new SquaresJob(n);
        JobRunnerFactory factory = new SimpleJobRunnerFactory();
        JobRunner jobRunner = factory.newJobRunner(job);
        jobRunner.run();
        assertEquals(n * (n + 1) * (2 * n + 1) / 6, job.getSumOfSquares());
    }
}
We create a new SquaresJob to calculate the sum for n = 10, and then obtain a JobRunner for it from the SimpleJobRunnerFactory. Calling the run() method on the runner blocks until the job completes, at which point we can retrieve the overall result from the job and check that it has the correct value.
The Job interface specifies two abstract methods:

- void generateTasks(ComputeSpace space), where the implementor specifies how the problem is broken up into Tasks.
- void collectResults(ComputeSpace space), where the implementor recombines the results from each Task into the final result.
Let's see the implementation of SquaresJob:

package computefarm.samples.squares;

import computefarm.CancelledException;
import computefarm.CannotTakeResultException;
import computefarm.CannotWriteTaskException;
import computefarm.ComputeSpace;
import computefarm.Job;

public class SquaresJob implements Job {

    private final int n;
    private int sum;

    public SquaresJob(int n) {
        this.n = n;
    }

    public void generateTasks(ComputeSpace space) {
        try {
            for (int i = 1; i <= n; i++) {
                space.write(new SquaresTask(i));
            }
        } catch (CannotWriteTaskException e) {
            return;
        } catch (CancelledException e) {
            return;
        }
    }

    public void collectResults(ComputeSpace space) {
        try {
            for (int i = 1; i <= n; i++) {
                Integer result = (Integer) space.take();
                sum += result.intValue();
            }
        } catch (CannotTakeResultException e) {
            return;
        } catch (CancelledException e) {
            return;
        }
    }

    public int getSumOfSquares() {
        return sum;
    }
}
In the generateTasks() method, we create a SquaresTask for each term in the sum and then write them into the supplied ComputeSpace. Conversely, in the collectResults() method, we take task results as they appear from the space and sum them. Both method implementations deal with various exceptions that may arise when interacting with the ComputeSpace; we shall look at these in more detail later, but for the moment just note that we exit if anything "bad" happens.
Finally, the listing for SquaresTask is very straightforward. It implements the single execute() method of the Task interface, which is an example of the Command pattern. For this example, the implementation squares an integer. SquaresTask has also been marked Serializable to allow it to be marshalled using RMI, as task instances will be when we use the JavaSpaces runner to share tasks between machines. Similarly, the return type of the execute() method must be Serializable. In this example, java.lang.Integer, the return type, satisfies this requirement.

package computefarm.samples.squares;

import java.io.Serializable;

import computefarm.Task;

public class SquaresTask implements Serializable, Task {

    private final int k;

    public SquaresTask(int k) {
        this.k = k;
    }

    public Object execute() {
        return new Integer(k * k);
    }
}
The test now runs and passes. The next step is to see how to run the program
on a network of machines.
Code Mobility

The key difference between running ComputeFarm in a single JVM (using SimpleJobRunnerFactory) and in multiple JVMs (using JavaSpacesJobRunnerFactory) is that in the multiple-JVM case, there must be some way for the task implementations to be downloaded to each JVM. The way ComputeFarm workers are able to be generic is by dynamic code downloading, which in Java is most commonly achieved using an RMI codebase. In a nutshell, this means that Java classes are made available to remote JVMs by providing a codebase--a URL from where the classes can be downloaded. The standard way of doing this is to package up the class files that are needed by the remote JVM into a .jar that is conventionally named something like myclasses-dl.jar, where dl indicates the .jar is for downloading. Then the .jar is made downloadable by running a web server to serve it up.
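As a concrete sketch, an Ant target along the following lines would package just the classes a worker needs to download. The target name, property names, and class list here are hypothetical and must be adapted to your own project:

<!-- Hypothetical Ant target: package the classes remote workers must download. -->
<target name="dl-jar" depends="compile">
  <jar destfile="${dist}/squares-dl.jar" basedir="${classes}">
    <include name="computefarm/samples/squares/SquaresTask.class" />
  </jar>
</target>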
While all of this is well understood and not inherently difficult, it is an area where there is considerable scope for configuration error (witness the number of hits a web search for codebase problems turns up). Of course, you can use any HTTP server, but to ease development and deployment, ComputeFarm provides a lightweight embeddable server called ClassServer that allows you to run the server in the same JVM as the ComputeFarm client. This avoids some of the complexity of running a separate HTTP daemon, since, for example, the embedded server takes care of choosing a free port number and setting the java.rmi.server.codebase property. You still have to create a download .jar, specifying the classes to include in the .jar manually, perhaps using the Ant Jar task. Or you can use a tool that does dependency analysis, such as ClassDep, which is a part of the Jini 2.0 package, or classdepandjar from the Cheiron project.

If you want to avoid having to create a download .jar, consider using ClasspathServer, which, as the name suggests, serves classes directly from the classpath. It is very simple to deploy, but should be avoided in production environments, since it imposes an overhead of one HTTP transfer per class, which can be prohibitive for systems that have many classes or workers. Furthermore, ClasspathServer has no security controls governing which classes are downloaded.
Running the Example Program

The ComputeFarm client is really just an extension of the SquaresJobTest above. Instead of using a SimpleJobRunnerFactory, we call JobRunnerFactory.newInstance(), which creates a JavaSpacesJobRunnerFactory by default. (The system property computefarm.JobRunnerFactory can be used to control the implementation class that is loaded by a call to JobRunnerFactory.newInstance().)
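For example, the in-JVM runner could be forced from the command line without changing any code. That the property value is simply the factory class name shown earlier is an assumption here, inferred from the description above rather than confirmed by ComputeFarm's documentation:

java -Dcomputefarm.JobRunnerFactory=computefarm.impl.simple.SimpleJobRunnerFactory \
     computefarm.samples.squares.SquaresClient 10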
Also, a ClasspathServer is started to allow the remote workers to download the class files for SquaresTask.

package computefarm.samples.squares;

import computefarm.JobRunner;
import computefarm.JobRunnerFactory;
import computefarm.impl.javaspaces.util.ClasspathServer;

public class SquaresClient {

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println(
                "Usage: java " + SquaresClient.class.getName() + " <n>");
            System.exit(1);
        }

        ClasspathServer server = new ClasspathServer();
        server.start();

        int n = Integer.parseInt(args[0]);
        SquaresJob job = new SquaresJob(n);
        JobRunnerFactory factory = JobRunnerFactory.newInstance();
        JobRunner jobRunner = factory.newJobRunner(job);
        jobRunner.run();

        System.out.println("n = " + n);
        System.out.println("Sum of squares = " + job.getSumOfSquares());
        System.out.println("n * (n + 1) * (2 * n + 1) / 6 = "
            + (n * (n + 1) * (2 * n + 1) / 6));
    }
}
Before we run SquaresClient, how do we set up our network of workers? I find it convenient to have my development box running both JavaSpaces and the client process (although it is fine to run these two on separate machines), and to run a worker on every other available processor on the network.

Each machine needs a Java 1.4 runtime, so start by installing it on every box.

Next, set up JavaSpaces. I prefer running ComputeFarm on Blitz JavaSpaces, since it is open source (BSD license) and very easy to get up and running. However, ComputeFarm can run on any JavaSpaces implementation--I have successfully used Outrigger (the JavaSpaces implementation that comes with Jini) and GigaSpaces. To run Blitz JavaSpaces:

1. Download Jini 2.0 (the latest stable release at the time of writing), and install it by unzipping the archive to a suitable location.
2. Download the Blitz JavaSpaces (Pure Java Edition) installer, and run it by typing java -jar installer_pj_n_nn.jar, where n_nn is the version number. On Windows, you can simply double-click the .jar to launch it.
3. Start Blitz JavaSpaces by typing blitz (Windows) or blitz.sh (Unix) in the directory in which you installed it.

Next, install the workers. For each worker:

1. Get the latest ComputeFarm worker distribution and unzip it.
2. In the directory you unzipped it in, type run.

Finally, run the client, not forgetting to specify a policy file that grants enough permissions for Jini to work. The following Ant task shows one way of doing this. (The ComputeFarm distribution, which includes the squares example, has a full Ant build file.)
<java classname="computefarm.samples.squares.SquaresClient" fork="true">
  <arg value="10" />
  <jvmarg value="-Djava.security.policy=src/java/policy.all" />
  <classpath>
    <pathelement location="${jini.lib}/jini-core.jar" />
    <pathelement location="${jini.lib}/jini-ext.jar" />
    <pathelement location="${lib}/computefarm-0.8.2.jar" />
    <pathelement path="${classes}" />
  </classpath>
</java>
This is what you should see from the client:

Buildfile: build.xml

squares:
     [java] n = 10
     [java] Sum of squares = 385
     [java] n * (n + 1) * (2 * n + 1) / 6 = 385
Let's look now at how things might go wrong when running a parallel program.

Fault Tolerance

When you move from a single process running your program to multiple distributed processes, there is a change in kind in the types of failure that programs can exhibit. Peter Deutsch famously listed the assumptions that do not carry over to distributed computing in "The Eight Fallacies of Distributed Computing." The development of Jini can be understood as a distributed computing model that takes these fallacies seriously--it doesn't try to "solve" them; it provides mechanisms for the programmer to deal with them. For example, since the network is not reliable, all remote method invocations throw java.rmi.RemoteException for the programmer to handle in the way most appropriate to the application at that point.

Jini's design promotes a certain flexibility and robustness in many of its applications. In ComputeFarm, the order in which services are started is not important. You can start the client first, then the workers, then the JavaSpace, and the system would still work. Or, if a worker crashes, the computation would still continue (assuming there are other workers still running), since each task is executed under a Jini transaction, which would be rolled back after the worker crashed, leaving the task in the space for another worker to pick up. JavaSpace systems even clean up after themselves. Any tasks or results left in the space from a cancelled job will be removed when their leases expire. Note that both the transaction timeout and the lease time in ComputeFarm may be configured; see the ComputeFarm documentation for details.
To make your program robust, you also need to consider how to handle the three possible types of exception that can arise when dealing with the remote ComputeSpace. Let's look at them in turn.

ComputeSpaceException

ComputeSpaceException is an unchecked exception that is thrown when there is an unrecoverable error while writing to or taking from the space. This case is typically not caused by a networking problem, but by a resource limitation (such as running out of disk space) or by encountering an internally inconsistent state (such as not being able to unmarshal a result object). Since it is unchecked (and unrecoverable), you do not normally need to handle this exception.
CannotWriteTaskException and CannotTakeResultException

CannotWriteTaskException and CannotTakeResultException are the same type of exception; the only difference is that the first may be thrown during write operations, and the second during take operations. These exceptions are thrown if there is a transient problem while communicating with the space. They guarantee that the state of the space will not have been changed by the method call, so a failed write() call does not write the task to the space, and a failed take() call does not remove a result from the space.

Users of the space can catch these exceptions to implement a retry strategy (for example, retrying a limited number of times after a suitable interval), safe in the knowledge that the space has not been left in an indeterminate state. This is not quite the full story, since, due to the concurrent execution of the generateTasks() and collectResults() methods, it is possible that the client could block indefinitely on the call to take(), if a call to write() has failed with a CannotWriteTaskException and the client has stopped trying to do any further writes.
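For instance, generateTasks() in SquaresJob could be hardened with a bounded retry along the following lines. This is a sketch only: the retry count and sleep interval are arbitrary, and it assumes the Job keeps a reference to its JobRunner (not part of the listings above) so that it can give up cleanly.

public void generateTasks(ComputeSpace space) {
    for (int i = 1; i <= n; i++) {
        SquaresTask task = new SquaresTask(i);
        for (int attempt = 1; ; attempt++) {
            try {
                space.write(task);
                break;                       // written successfully; next task
            } catch (CannotWriteTaskException e) {
                if (attempt == 3) {
                    jobRunner.cancel();      // give up: unblocks collectResults()
                    return;
                }
                try {
                    Thread.sleep(1000);      // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            } catch (CancelledException e) {
                return;                      // job cancelled elsewhere; stop
            }
        }
    }
}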
The way to unblock the execution is to invoke the cancel() method on the JobRunner that started the Job. This can be done by the code in the Job implementation that implements the retry strategy, when it has decided to give up. Alternatively, the client that creates and runs the JobRunner can call cancel() when it decides that the job has not completed in a sufficiently timely manner--perhaps under user intervention, or after a predefined timeout. The following code illustrates the latter strategy for a job called NonTerminatingJob, which blocks in the way described above.

public class NonTerminatingClient {

    public static void main(String[] args) throws Exception {
        ClasspathServer server = new ClasspathServer();
        server.start();

        final NonTerminatingJob job = new NonTerminatingJob();
        final JobRunner jobRunner =
            JobRunnerFactory.newInstance().newJobRunner(job);
        Thread jobRunnerThread = new Thread() {
            public void run() {
                jobRunner.run();
            }
        };
        jobRunnerThread.start();

        System.out.println("Waiting 5 seconds before cancelling job...");
        Thread.sleep(5000);
        System.out.println("Cancelling job...");
        jobRunner.cancel();
        System.out.println("Waiting for job runner to finish...");
        jobRunnerThread.join();
        System.out.println("Job runner finished.");
    }
}

The client creates a new thread to run the JobRunner for NonTerminatingJob, and then cancels the job after waiting five seconds by calling cancel() on the JobRunner.
CancelledException

CancelledException is a checked exception that is only thrown when cancel() has been called on the associated JobRunner. The way to handle this exception is obviously to stop doing any more work and clean up. Here is the code for NonTerminatingJob, and the output showing the sequence of calls:

public class NonTerminatingJob implements Job {

    public void generateTasks(ComputeSpace computeSpace) {
        // Deliberately generates no tasks, so collectResults() blocks on take().
    }

    public void collectResults(ComputeSpace computeSpace) {
        System.out.println("Collecting results...");
        try {
            computeSpace.take();
        } catch (CannotTakeResultException e) {
            e.printStackTrace();
        } catch (CancelledException e) {
            System.out.println("Job cancelled.");
        }
    }
}

Buildfile: build.xml

nonterminating:
     [java] Waiting 5 seconds before cancelling job...
     [java] Collecting results...
     [java] Cancelling job...
     [java] Waiting for job runner to finish...
     [java] Job cancelled.
     [java] Job runner finished.
Conclusion

For the example we looked at in this article, it was not really worth the effort to recast it as a parallel program, since there are no real performance benefits. However, there is a class of problems that are both computationally expensive and divisible into independent chunks, for which rewriting as a parallel program is highly beneficial. (For example, the ComputeFarm distribution includes a distributed extension of JUnit, and the training component of Joone, the leading open source Java neural network framework, runs on ComputeFarm.) If you have an application that fits this description, then consider using ComputeFarm to distribute the processing effort across a network of machines.
Resources

- The ComputeFarm project pages have documentation and downloads.
- "How to Write Parallel Programs: A Guide to the Perplexed" (PDF) (ACM Computing Surveys, Vol. 21, No. 3, September 1989), by Nicholas Carriero and David Gelernter, is an excellent paper describing various approaches to solving problems using parallel programs.
- Eric Freeman, Susanne Hupfer, and Ken Arnold, JavaSpaces Principles, Patterns, and Practice. Pearson Education, 1st edition (June 15, 1999). Despite its age, this book is still the best way to see the exciting possibilities to be had with JavaSpaces.
- Phillip Bishop and Nigel Warren, JavaSpaces in Practice. Pearson Education, 1st edition (September 20, 2002). Lots of good advice on getting the best out of JavaSpaces.
- Visit jini.org to find the latest Jini and JavaSpaces news. The Jini mailing lists are worth keeping an eye on.

Tom White is lead Java developer at Kizoom, a leading UK software company in the delivery of personalised travel information.
Showing messages 1 through 13 of 13.
JavaSpaces and ComputeFarm
14:45:52 tomwhite

It's interesting that the comments so far have latched on to JavaSpaces (and its alleged limitations), when ComputeFarm is actually independent of JavaSpaces. One of ComputeFarm's goals is to create a very simple API that facilitates running Java programs in parallel, without caring about the underlying implementation.

Now, of course, the underlying implementation cannot be totally ignored - it has to be reliable and performant, at least. There are other implementation choices here: ActiveSpaces, Green Tea, Coherence, etc.; however, the only implementation at the moment is the JavaSpaces one. I think that the replicated-worker pattern hits JavaSpaces' sweet spot - people are always moaning that there are no applications for JavaSpaces - but I believe this is a good one! JavaSpaces is a healthy technology, with at least three implementations in active development. So I'm a bit wary of general statements that "JavaSpaces does not scale" - more detail, please. Equally, I'm sure there are projects that have shown JavaSpaces scaling successes - anyone from the jini.org community care to comment?

Thanks,
Tom

JavaSpaces and ComputeFarm
00:00:39 jdavies

Without wanting to start a flaming war, I think "rohit_reborn" is talking out of his finalizer! Please give us some facts. At least "cpurdy" knows what he's talking about, even if he is wrong :-) Only joking, Cam. I have seen some clever stuff done on Coherence that stands up to serious production tests; however, I would still tend towards JavaSpaces for the high CPU count. For the lower-end CPU count (but still high-performance computing), I wouldn't want to have to bet too much money against Coherence.

Back to this article: thank you, Tom. Personally I think the readers could have handled a more complex example, and it would have better demonstrated the advantages of ComputeFarm. I'm not too familiar with ComputeFarm in that I haven't used it, but I would be interested to know what you think it offers on top of raw JavaSpaces, RIO, OSGi, etc. Probably one of JavaSpaces' best assets is its simplicity; whilst abstracting the JavaSpaces API has advantages, do you really think you could implement ComputeFarm on anything else? The killer example would be something that works on J2EE as well as JavaSpaces; it would provide a path forward for J2EE, forward into the grid world using JavaSpaces.

I enjoyed the paper; please don't take my questions as criticism, I'm just curious.

Regards,
-John-

JavaSpaces and ComputeFarm
07:16:21 cpurdy

> I have seen some clever stuff done on Coherence that stands up to serious production tests however I would still tend towards JavaSpaces for the high CPU count. For the lower end CPU count

Hi John,

This comment I definitely don't understand. We have customers running data grids spread across hundreds of CPUs, managing terabytes of HA data. That's one giant data fabric, one giant "space", one highly resilient, 100% available, dynamically scalable, self-healing and lossless data grid, being managed in concert by hundreds of CPUs.

Not only that, but if we could find a demanding enough application, it would easily scale out to thousands of CPUs. Our resource load per node (e.g. threads, sockets, etc.) stays constant, regardless of the number of nodes. That's one of the ways that we achieve linear horizontal scale-out. And we do all that without any single points of failure. Machines come, machines go, machines die, machines lock up, but the data grid not only keeps running, it does so without losing any of the data that it is managing, even when machines are dying.

Peace.

JavaSpaces and ComputeFarm
07:10:16 cpurdy

JavaSpaces and ComputeFarm
14:47:38 tomwhite

I agree about the simple example - originally I did plan a second, more complex example in the piece, but I decided against it as the article was getting rather long. The ComputeFarm distribution has some more realistic examples, such as a distributed JUnit. ComputeFarm is actually slightly higher-level than JavaSpaces: it handles transactions for you, as well as Jini discovery. As for whether it could be implemented on top of anything else, I'm confident it could be, but I haven't tried. The key requirement is a framework that supports code mobility. Bringing ComputeFarm to the J2EE world would be interesting - ideas on how to do this gratefully received!

Cheers,
Tom

ComputeFarm with cajo?
07:49:34 cajo

Tom,

Before trying to port ComputeFarm to a proprietary platform, perhaps you might want to consider trying a free one first. I think more developers could benefit from that. An excellent candidate (if I do say so myself :-) would be the cajo project, a project I lead here at java.net. It is extremely easy to use, and works in all runtime environments: EE, SE, and ME. I would be happy to answer any questions you might have, and to incorporate any feedback you might offer. I think you would be most intrigued by its capabilities. If nothing else, please have a look at the short project overview.

John Catherino
Project Lead

Try Tangosol Coherence
11:46:47 cpurdy

You'll find that Coherence is a lot simpler, a lot more natural, and solves the scalability problems that spaces have.

Try Tangosol Coherence
14:36:40 aramallo

I've heard a couple of your customers talking highly of Coherence. Good job! I know Coherence is a distributed cache, and that you can use it (although this is not very well explained on your site) to build a data grid. I think the confusion around JavaSpaces being a cache is reasonable, since you can use JavaSpaces as the foundation of a cache solution, as GigaSpaces has already done (I am a GigaSpaces customer, BTW).

I looked at JavaSpaces because of the tuple-based programming model, because of Jini, and because of its simplicity. Although I never considered Coherence to build our virtual compute server, I did consider using JGroups, if it helps to clarify my point. Again, spaces scalability problems? JavaSpaces is a spec; are you talking about a specific implementation of it? BTW, it is not my intent to go into the same kind of thread I've seen on TSS about Tangosol and Gigaspaces :-) Only to clarify why people might want to use JavaSpaces.

Try Tangosol Coherence
07:09:42 cpurdy

> I know Coherence is a distributed cache and that you can use it (although not very well explained in your site) to build a data grid.

Any application that uses Coherence can be a part of (or a client of) a scale-out data grid that is provided by Coherence. There is nothing extra to do; you just start more instances of the application, and the data grid gets larger (more capacity), and the conceptual interconnects exponentially increase the resiliency of the data fabric. The "space" (if you will) is managed by the grid, not by some particular server, so the more servers you add to the grid, the larger and faster and more resilient the data grid becomes.

> I looked at Javaspaces because of the tuple-based programming model

The JavaSpace model (tuples, LINDA, etc.) is totally different from the approach that we took in Coherence. Basically, if you want a tuple-space approach in Java, then you should use the JavaSpaces APIs (i.e. the right tool for the right problem).

> Although I never considered Coherence to build our virtual compute server I did consider using JGroup, if it helps to clarify my point.

Again, I think that if you have a tuple model, then JavaSpaces is the best API choice for Java. Other models become extremely unwieldy to squeeze into a tuple-space-based implementation.

Peace.

Opinion vs Fact
02:21:24 dancres

JavaSpaces are used for a good deal more than just compute farms. I'm aware of a number of industry applications of my JavaSpaces implementation (Blitz), which include:

- Video editing (and even video streaming)
- Systems monitoring and stats collection
- Document processing (with objects in the multi-megabyte range)
- Modelling systems (again with objects in the multi-megabyte range)
- High-speed event systems (one running at 1000s of events a second)
- 24x7 control systems (no, you don't need clustering to do this)

Many users cite their choice of JavaSpaces being because it's simple, neat, and fits their problem well - these users include the likes of Lockheed Martin. Advocating the use of technology because it's tried and tested is on the surface a good argument (you won't get fired for choosing a database!), but the downside is that you stagnate - one technology cannot possibly be the solution for all problems. Further, there's a danger of seeing every issue from the perspective of that single technology, which corrupts your architectural vision and prevents you from making fully informed decisions.

Several postings talk about performance but don't cite any data or benchmarks (including details such as setup, tuning involved, etc.). This is a trend that was seen a few years back with Linux, where its competitors went so far as to concoct benchmarks to suit their commercial needs. Many of the resulting reports were discredited, leaving the originators with egg on their faces.

Overused Example and javaspace usage sighted
05:41:44 rohit_reborn

JavaSpace has tremendous bottlenecks. The example cited is an overused one. Everyone in the Jini/JavaSpace world knows how to use JavaSpace. JavaSpace has a (JVM) memory limitation; it ain't worth spending time over it. A shared virtual memory is a much better option compared to JavaSpace, or a shared distributed database. At least we know databases are well-matured stuff.

Overused Example and javaspace usage sighted???
14:20:49 aramallo

Your comment is really confusing. What are JavaSpaces' tremendous bottlenecks? I am using a JavaSpaces implementation exactly for the purpose explained in this post, and it works great. A JavaSpaces memory limitation, or a JVM memory limitation? If you meant a JavaSpaces memory limitation, let me point out that your comment is as incoherent as saying "JDBC memory limitations"; JavaSpaces is a spec (API), not an implementation. Are you referring to a specific implementation of the spec (Outrigger, Blitz, Gigaspaces, etc.)?

A shared virtual memory is a much better option? JavaSpaces is not a spec for a distributed database! In fact, it is a spec for a Jini distributed shared memory service. On your point about maturity: JavaSpaces is a Java spec that follows the tuple-based programming paradigm developed at Yale University and implemented in the Linda programming language in the early '80s. Linda is used for parallel programming. The beauty of JavaSpaces is the simplicity and power of the tuple-based programming model, and the fact that it is a Jini service.

Overused Example and javaspace usage sighted
05:46:23 rohit_reborn

Or at least the essence of the example remains the same. Everywhere on the net we get to see this same use of JavaSpace. I don't like JavaSpace.